OneBlindMouse

joined 1 year ago
[–] OneBlindMouse 3 points 10 months ago

Are you referring to unmixing audio? I use Steinberg's SpectraLayers Pro; as a Cubase user, whichever version I installed last becomes resident as an extension, but it also runs standalone. I use it for things like audio repair and manipulation.

I think the other 'big name' in the field would be iZotope's RX 10.

None of them are perfect, and it can be quite tricky to isolate to a forensic depth, but I also know that SpectraLayers has better tool customisation and thresholds, and better layer management.

I suppose, like most audio things, people will tell you that the one they use is best, so I wouldn't just take my word for it.

[–] OneBlindMouse 2 points 1 year ago (1 children)

I rarely use a de-esser on vocals. Rather, I manage the types of noises they reduce manually on the waveform. It's easy to recognise things like 'sss' and 'fff' by how the noise looks, and then they can be dropped by as much as 4 dB using a trim function.

Sure, it can be slow and it means sectioning, but it's just a better option for me, as a de-esser can treat some of the noises well and make others sound like a lisp. They're a bit all-or-nothing.
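If anyone wants the gist of the manual approach in code terms, here's a toy Python sketch (function and variable names are my own, not from any DAW):

```python
def trim_db(samples, start, end, db):
    """Scale samples[start:end] by a dB amount (negative reduces).

    Toy version of the manual method: once a sibilant region
    ('sss'/'fff') has been spotted on the waveform, pull just that
    section down by a few dB instead of de-essing the whole take.
    """
    gain = 10 ** (db / 20)           # -4 dB -> gain of ~0.63
    out = list(samples)
    for i in range(start, end):
        out[i] *= gain
    return out

vocal = [0.1, 0.1, 0.8, 0.8, 0.1]    # pretend the 0.8s are an 'sss'
fixed = trim_db(vocal, 2, 4, -4.0)
```

The point is that only the flagged section is touched; everything outside it passes through untouched, which is exactly what a broadband de-esser can't promise.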

If I do apply de-essing, it's the Cubase stock one, and it's always a gentle application and always after the first compression.

[–] OneBlindMouse 2 points 1 year ago (1 children)

Yeah, I guess that's a useful thing to have. It seems that not many people even use a reference track these days for their mix. I do still use them, and when I'm mixing for other people I ask them if they have one... just to get an idea of what they're looking for. If Ozone works, it has to be a good thing. I just don't trust it, to be honest.

[–] OneBlindMouse 3 points 1 year ago

AI can definitely 'see' things I can't in a spectral layer. It's not perfect, none of them are, but mopping up after them is getting easier as they improve. I just know there's going to be a day when I can't distinguish between a human tune and an AI one, and I find that terrifying.

Thankfully, neither you nor I are making that kind of modern 'homogemastered' mainstream stuff.

[–] OneBlindMouse 2 points 1 year ago

The faucet (tap... I'm English) analogy is perfect and yeah... there are so many of us making music now that the arena is literally stuffed... maybe AI-generated music has a place..? I dunno. Not for me... yet.

[–] OneBlindMouse 2 points 1 year ago (3 children)

I don't work in a typical or mainstream genre either. My own mixing methods are unorthodox and I generally master 'un-loud' so things like Ozone wouldn't help me anyway. Guides to me are still reference tracks but yes, I see them as helping a great deal in some production for some people.

[–] OneBlindMouse 1 points 1 year ago

I think that's probably the best use of it... as some kind of guide.

 

A conversation popped up on another platform about the role of AI in music production, generally as it's used in the mastering process. Now, I'm not sure how much AI that actually involves; I see it more as a set of rules that will map your song or music to a contemporary 'good mix'... basically controlling the EQ, RMS peak and LUFS. Things like this are becoming more and more prominent on music hosting sites.
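For what it's worth, the RMS side of that mapping is just arithmetic. A rough Python sketch (my own naming) of the kind of figure those services measure; real loudness (LUFS) adds K-weighting and gating on top of this:

```python
import math

def rms_dbfs(samples):
    """RMS level of a signal in dBFS (0 dBFS = full scale).

    Illustrative only: automated mastering tools compare numbers
    like this against a target and nudge the gain/EQ to match.
    """
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 20 * math.log10(math.sqrt(mean_sq))

# A full-scale sine sits at about -3 dBFS RMS.
sine = [math.sin(2 * math.pi * 10 * i / 1000) for i in range(1000)]
level = rms_dbfs(sine)
```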

I do use AI in some processing, as I use software like Steinberg's SpectraLayers to 'un-layer' and un-mix tonal qualities, and so on, but I don't use it in mastering. I do that the old-fashioned way.

Your thoughts..? Yay or nay..?

[–] OneBlindMouse 2 points 1 year ago* (last edited 1 year ago)

When you say 50%, are you referring to the 'middle' of the frequency curve..? Try separating... low and high pass at about 150-200 Hz, then centre the low, keeping it clean, and add some kind of saturation to the high, then pan two copies of that, nothing mad or hard L/R. If the bass conflicts with the kick for space, give the kick priority, either using dynamic EQ or multiband compression on a side chain.
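If it helps to see the split spelled out, here's a bare-bones Python sketch (helper names are mine; a real crossover would use steeper matched filters than this one-pole pair):

```python
import math

def one_pole_lowpass(x, fc=180.0, fs=48000.0):
    """Crude 6 dB/oct low-pass around the suggested 150-200 Hz split."""
    a = 1 - math.exp(-2 * math.pi * fc / fs)
    y, out = 0.0, []
    for s in x:
        y += a * (s - y)
        out.append(y)
    return out

def split_bands(x, fc=180.0, fs=48000.0):
    """Return (low, high); the two bands sum back to the input exactly."""
    low = one_pole_lowpass(x, fc, fs)
    high = [s - l for s, l in zip(x, low)]
    return low, high

def saturate(x, drive=2.0):
    """Gentle tanh saturation for the high band."""
    return [math.tanh(drive * s) / math.tanh(drive) for s in x]

def pan_gains(p):
    """Equal-power pan; p in [-1, 1], 0 is centre. Keep it modest."""
    theta = (p + 1) * math.pi / 4
    return math.cos(theta), math.sin(theta)
```

Centre the low band and keep it clean, then saturate the high copy and pan it a touch each way; since the two bands sum back to the original, the split itself doesn't colour the sound.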

There is no right or wrong here, just 'what works' but finding the sweet spot in these strategies might help.

Someone below mentioned double-tracking the guitar by replaying. This is a good idea, but make sure your timings are hitting, especially on supporting 'power' chords, otherwise you'll lose punch in the final mix. If you're double-tracking, listen in mono too. You will possibly have phasing issues.
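That mono check can even be put into numbers. A toy Python sketch (my own naming): a ratio near 1 means the doubled takes sum cleanly, near 0 means they're cancelling:

```python
def mono_energy_ratio(left, right):
    """Energy of the mono sum relative to the average stereo energy.

    ~1.0: takes are phase-coherent; near 0.0: heavy cancellation,
    i.e. the double-track will vanish when played back in mono.
    """
    def energy(x):
        return sum(s * s for s in x)
    mono = [(l + r) / 2 for l, r in zip(left, right)]
    stereo = (energy(left) + energy(right)) / 2
    return energy(mono) / stereo if stereo else 1.0

take_a = [0.4, -0.1, 0.3, -0.2]
in_phase = mono_energy_ratio(take_a, take_a)               # -> 1.0
flipped = mono_energy_ratio(take_a, [-s for s in take_a])  # -> 0.0
```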

Enjoy.

[–] OneBlindMouse 2 points 1 year ago (2 children)

Definitely keep things like vocals, bass, kick etc. straight down the middle. You could consider sending the guitar to a separate bus, adding some soft effects and then panning those. Depending on the tone of your bass, you can duplicate it, high-pass one and low-pass the other, send the low down the middle and slightly pan the brighter track. You could achieve quite a bit of width just doing this, without recourse to stereo imaging, which of course you can still use.

[–] OneBlindMouse 3 points 1 year ago

I'm an ageing English human hobbyist musician and pretty much learn 'on the hoof' and usually by making lots of mistakes. I'm an experimentalist and just enjoy what audio is doing at a given time and in the presence of other audio.

I'm not from a musical family or anything, and have no training, but I don't think I'm a useless producer... I just need to continue learning stuff.