Own volume in a mix
Of course this problem does not occur in the first position of the chain. I wouldn't normalize a template or even push it right up, because the following remixes need some dB of headroom too. :)
By the way, MIDI drums are funny if they use a hard-panned stereo panorama: the drummer seems to run across the stage to hit a crash shot. I've never seen this with open eyes, but I hear it with closed eyes. :)
- First because they're very difficult to record and make sound good in the first place!
- Second, they have specific frequencies for the snare, kick, toms and cymbals - they span the whole frequency range but are one instrument.
- Third is getting their balance right. They need to drive and punch through the mix in order to give the rhythm but they must not override any lead instruments or vocals.
- Fourth is then making them clear in the mix. I like clean-sounding drums.
- Fifth is my speakers because they seem to make the drums sound further forward than they are (that may be my Celestion bass enhancement speaker having an effect as I cannot control its relative volume - I need a second amp for that really). So, to really try and balance it all, I have to listen through two different sets of headphones, my main speakers and a pair of cheap USB PC speakers. And at various volumes. And in mono. In most cases, I just cannot be bothered to go through all that as it involves plugging and unplugging things!
So yes, I find it very difficult and rarely, if ever, get it right. Sometimes I wish I had an e-kit and could be done with it, but they present their own audio problems for me. I usually have to use best guesses for my mixes, especially as my drums don't sound the same across recordings (even if I don't move the mics!) and the volume is far more inconsistent and dynamic between hits than that of an electric kit. It requires skills with EQ, compressors, etc., that I just do not have.
I'm investing in the 'Drum Leveller' plugin this month. It's an ouch at $149 but I'm hoping it'll give me a bit of a better 'professional' drum sound and consistency that I just don't have the skills (or time to learn) to achieve via traditional means. Of course, it's basically 'Autotune' for drums. Whether that's cheating..? I don't know.
Edited by mpointon on 01-03-2016 16:12
it's basically 'Autotune' for drums. Whether that's cheating..? I don't know.
It's no more cheating than the gates, equalisers, compressors, effects and whatnot that are used on every instrument to get a good mix. That's what it's all about: getting a good song out. For me, I don't care how you do it, it's the end result that counts.
I guess the best mixing tool is our ears... As Martin has written, the drums are present across the whole frequency range. When I use an already-done mix (no separate tracks available), I first try to work on the drum sound. It's my basis. If the drums sound good, it seems like all the instruments sound good.
From this point it's hard work to add guitars (in my case) that fit the same mix colour. There's sometimes too much reverb on some instruments in a mix.
If I have a separate drum track in the mix, the right level is when I can clearly hear each part of the drums - hi-hat and cymbal details. Then I try to find an EQ setting that makes the snare slam.
I know it's not a good option, but all my mixes are done on headphones (one loudspeaker has broken down). At least they're good ones (AKG K240 monitoring headphones).
I try to self-correct my mix to get my bearings. And my reference listening spot is my car ^^
I don't know if my mixes are listenable, but it takes me a lot of time.
Listening time, hesitation, trial and error...
Most of us here at the loops are "only" musicians; I now understand much better how difficult and skilled the job of the sound engineers I've worked with was... ^^
When it comes to guitar sounds, there are a number of culprits that make the sound occupy more frequency space than the original guitar sound:
1) Distortion. This does two things: it adds new harmonics, and it increases the amplitude of otherwise-decaying frequencies (otherwise known as compression). Both of these make the sound occupy much more frequency space. The space demand increases as the distortion gets "harder". Chords bloat much more than single notes; this is the reason the power chord was invented at the same time as distortion became popular.
2) Compression. This increases the amplitude of otherwise-decaying frequencies and makes the notes occupy larger time slots.
3) Reverb. From an audibility point of view, similar to compression: it increases the time slot of the notes.
4) Modulation effects (chorus, flanger, phaser, rotary etc.). These work in a more subtle way, spreading a single frequency into adjacent ones.
When using any of these, one must take into account that they increase the frequency-space demand of the guitar sound, and this must be reflected in the overall arrangement of the whole track.
Equalizing and gating can be used to counteract some of the effects.
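The harmonics-from-distortion point above can be demonstrated numerically. Below is a minimal plain-Python sketch (not any particular pedal or plugin's algorithm): a clean sine has energy only at its fundamental, while a hard-clipped copy of it grows strong odd harmonics, exactly the extra frequency content nilton describes. The window length, clip threshold and cycle count are arbitrary example values.

```python
import math

def dft_mag(signal, k):
    """Magnitude of the k-th DFT bin of a real signal (A/2 for a sinusoid of amplitude A)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    return math.hypot(re, im) / n

N = 1024      # samples in one analysis window
CYCLES = 8    # whole sine cycles in the window, so bin CYCLES is the fundamental

# A clean sine wave: energy only at its fundamental.
clean = [math.sin(2 * math.pi * CYCLES * i / N) for i in range(N)]

# Hard clipping, a crude stand-in for heavy distortion.
clipped = [max(-0.3, min(0.3, s)) for s in clean]

for name, sig in [("clean", clean), ("clipped", clipped)]:
    fund = dft_mag(sig, CYCLES)
    third = dft_mag(sig, 3 * CYCLES)   # 3rd harmonic bin
    fifth = dft_mag(sig, 5 * CYCLES)   # 5th harmonic bin
    print(f"{name}: fundamental={fund:.4f} 3rd={third:.4f} 5th={fifth:.4f}")
```

Symmetric clipping produces odd harmonics only; asymmetric clipping (as in many tube stages) adds even harmonics too.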
@mpointon: check out [url=http://accusonus.com/products/drumatom] drumatom [/url]
Edited by nilton on 01-03-2016 17:30
Here are links to a good, free, early reflection vst, and
a free transient shaper vst.
Look around YouTube for videos about creating depth.
Here's a good one about transients:
I'm sure that once you fiddle a bit, you'll find your answer.
I guess the hardest thing is to find your own volume, because our brains think "I want to hear me in front". I definitely need a time break between recording and mixing. But I hate taking that break. I need distance from my own recordings. :) Not a law!
My main rule goes like this: In a great mix of a band every track sounds pretty poor if it is soloed for monitoring. :)
If you add to a template, mix it down to mono and cut the frequencies you need for your own instrument. Then give back what you have stolen until you sound pretty poor. :)
Don't try to do the last steps too early - e.g. compression and reverb glue the performance onto a virtual stage. You need fingers and ears, not VSTs or devices. Less, less, less!!!
Edited by Neronick on 02-03-2016 09:20
These hints are definitely worth considering if you are aiming to improve your mixes, and the info given by nilton on the effects of frequencies shared by several instruments is really important when choosing sounds in any live / band practice situation.
There's nothing wrong with playing the best available instrument, using the best available cable, microphone and recording device and the machines/VSTs to treat your signals in the best possible way.
I'm with TG here - if it helps improve the sound, why should one not use something? Religious reasons?
If you limit your thinking with a rule like "no VSTs, no devices", your results will show it: you'll have to invest huge amounts of money into a well-sounding studio and extremely expensive mics to compensate, plus you'll have to limit yourself to working with musicians who can make up for your reluctance to mix properly... that is a dead-end road if you ask me.
Outside of any technical aspects, I'd like to comment on something social that does strike me about this thread:
It started with a request for advice, and some really good advice was given by several friendly people.
I would have expected to see some appreciation for the time invested in these responses; instead, you end up giving advice yourself, contradicting everyone who offered expertise on the initial question... for the sake of this being a nice place, please do rethink your use of the WL forum for a minute. Thank you.
yes, you are looking at the administrator's signature.
Most things I could tell you about recording were already written by Hank Linderman in "Hot Tips for the Home Recording Studio", 1994.
No one has used a bad word in any thread at Wikiloops. This is really a good place to join - no bad comments on any track from anybody.
In a free world, with the goal of improving, we are allowed to talk about philosophical approaches without condemning any position.
My thanks to everybody who likes sharing his way of thinking. My social behaviour tends to inspire creative processes, hopefully. :)
I stumbled across [url=https://www.producelikeapro.com/videos/] This [/url] website about the same time I discovered wikiloops.
I've used a few tips on my bass, though he does cover other instruments.
Edited by Jeebsie on 25-03-2016 07:14
How do you avoid becoming too dominant in a sequential workflow?
Good question. Maybe after recording, allow a 'cool down' period where the initial enthusiasm can be replaced by a more considered appraisal.
I've heard some tracks where the creators of the template have ended up low in the mix - I run their track in parallel (being careful to synchronise it) and adjust the balance.
I work with headphones at high volumes, so a lot of aspects are mostly missing. When I'm done with the arrangement, I listen to the template at a very decent volume, and a few times with my addition. The volume level is hard to explain, but it's very close to mute, so you can listen relaxed to any kind of genre. That's how I fix up my additions and do mixes. Vocals tend to be overpowered very often - not my thing - but rhythm guitars and leads are quite easy to handle that way. Electronic stuff is different: drums and percussion are mostly too loud. I notice it too late, when listening on speakers, since I prefer headphones anyway.
Edited by Frankisaur on 16-04-2016 21:32
Even so, I find that there can be a frustrating gap between what I hear as my finished wave file and what I hear on wikiloops after I post.
This is a quite common perception, it is however not correct.
Wikiloops serves exactly the mp3-file you uploaded, there is no re-encoding or any other audio related altering of the data done after upload. The "loss" you hear can only be due to the mp3 encoding on your end.
Rendering your fine wave mix as mp3 and assuming the mp3 will sound just as good, without re-checking it, will lead to this experience - wikiloops is not to blame here :)
yes, you are looking at the administrator's signature.
Initially I didn't really have any idea of how to do this and probably was too loud and up front on every track. Being here (wikiloops) has made me realize (at minimum) these basics. I still can't mix very well, and frankly don't want to put in the hours it would take for each track. My ears aren't that discerning anyway. I hope a number of our fine guitarists take in this basic info. It's often a problem (for me) to hear an otherwise excellent player always mixed too loud and up front, even when they are playing backing.
Sorry about talking basics when most who have posted here are considering fairly advanced techniques. There are a lot of people here who are probably like me and first need to consider those basics, starting with "what part am I playing?", then giving appropriate attention to how to make it "sound right". Eventually we may all wind up alongside Nilton and p.musgrove talking about "integer multiples of its lowest frequency" and "a free transient shaper vst". Maybe in my next life....
Thanks for the feedback D. I didn't mean to imply that this was an issue with the site, per se. I chalk it up to being a noob with the whole mixing/recording experience. I'm still trying to figure out how the difference is generated...could be some setting in the web browser...I'm just not sure yet. I do recheck...I just don't know what to do about it as of yet.
And always try cutting the low end (high-pass above, say, 100 Hz) on the vocal track; this creates space for the bass and kick.
Now lower the level of the vocal a bit, to where you can still hear the pronunciation.
The frequencies are examples - find the frequencies that work on that specific track.
I know in reality it is always harder than in theory :)
Just my thoughts have fun mixing! :) :)
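The high-pass tip above can be sketched with a simple one-pole filter. This is a minimal plain-Python illustration (a gentle 6 dB/octave slope, softer than a typical DAW high-pass); the 100 Hz cutoff and the 50 Hz / 1 kHz test tones are arbitrary example values: rumble below the cutoff is attenuated while the vocal range passes almost untouched.

```python
import math

def one_pole_highpass(signal, fs, cutoff_hz):
    """Simple one-pole (6 dB/octave) high-pass filter."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in signal:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

def rms(sig):
    return math.sqrt(sum(s * s for s in sig) / len(sig))

FS = 44100
low = [math.sin(2 * math.pi * 50 * i / FS) for i in range(FS)]     # rumble below the voice
high = [math.sin(2 * math.pi * 1000 * i / FS) for i in range(FS)]  # well inside the voice range

low_out = one_pole_highpass(low, FS, 100)
high_out = one_pole_highpass(high, FS, 100)

# Measure on the second half to skip the filter's settling transient.
print(f"50 Hz level kept:  {rms(low_out[FS // 2:]) / rms(low[FS // 2:]):.2f}")
print(f"1 kHz level kept: {rms(high_out[FS // 2:]) / rms(high[FS // 2:]):.2f}")
```

A DAW's high-pass is usually steeper (12-24 dB/octave), so the bass/kick space it frees up is even larger than this sketch suggests.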
Edited by frenzie on 18-01-2017 23:31
"I love the sense of community, the pro attitude, and the spawning of ideas, that WORLDWIDE music collaboration brings. Help Wikiloops be here forever."