Hi, Andy and all readers.

My name’s Martin Kristopher. I’m currently studying Audio Engineering, and I started producing when I was 16, back in 1996. You can hear my own productions as Trium Circulorum, 3dtorus, Kanal Drei and 3rd Witness on Bandcamp, my website is

Hopefully I can share some useful information on mixing, mastering and general production techniques in this series. I tend to call my skills expertise by experience: the engineering studies are actually confirming what I did over the last 20 years as an autodidact, though I’m learning lots of nice pro terms.

I’ll also try to answer questions and comments and will make use of screenshots and photos from my studio.

Part One – Overview
I’d like to talk about mixing first.

A topic people often ask me about is how to build THE mixing chain. It’s fun, an art and a science at the same time, and I really love talking about it. I’ll build an example mixing chain that will hopefully inspire you.

We’ll not only talk about mixing chains but about mixing in general, and compare stereo mixing with mid-side mixing. We’re also aiming to provide you with detailed mixing tips.

Since I first tried mid-side mixing in 2012, a lot has changed in my perception of sound: de-mudded sub-bass, tonal bass and low drums (e.g. kicks, toms, tumba, djembe, timpani). The stereo panorama in this frequency range offers interesting possibilities.

The most important thing (IMHO) I can share on mixing is that everything is allowed. Try out everything you consider possible. Imagine – your DAW is a playground and your parents aren’t there. What counts is sound. Don’t invest more work and DSPs than necessary. Pile up and strip back.

If you’re willing to invest some time in reading, listening and experimenting, then keep an eye on Andy’s and my thoughts in this series.

Part Two – Signal Processing Chains
We thought for this article that I (Andy) would start by writing my thoughts on signal processing chains and Martin has added his comments and all of the screenshots.

I’d like to start by saying that in no way do I consider myself an expert; I’m very much learning as I go along. I’d also like to make a couple of definitions, just to make sure we’re talking about the same things. A song is the finished item and is composed of a number of tracks. A signal processing chain is what makes up a track: it comprises a sound source, such as a VST synth or sampler, and a number of effects that may be applied to that sound source.

Ultimately what we’re trying to achieve in a song is a balance of frequencies, timings and stereo space which we can think of as height, width and depth. I think a lot of the time we are trying to do this intuitively without really appreciating the technical terms involved.

Each track plays a vital part in this overall balance (mixing), which can be finalised further (mastering). It is very important to remember that you need to produce a good quality mix, because mastering isn’t magic: it can’t fix or rescue a poor quality mix, but it can make a good mix better.

The frequency of a sound is measured in Hertz, which is the number of cycles per second. Generally speaking we can hear sounds from 20Hz to 20,000Hz, but there is considerable variation between individuals.
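To make “cycles per second” concrete, here’s a minimal NumPy sketch that generates a pure tone at a given frequency (the 44.1kHz sample rate is just the CD-audio convention, not anything from the article):

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (the CD-audio convention)

def sine_wave(freq_hz, duration_s, sample_rate=SAMPLE_RATE):
    """Generate a pure tone: freq_hz cycles per second."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return np.sin(2 * np.pi * freq_hz * t)

# one second of concert A (440 cycles per second)
tone = sine_wave(440, 1.0)
```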

Each sound source has its own frequency content, but sometimes this isn’t clearly defined and can be spread across the audio spectrum. Some sounds, such as a kick drum and a bassline, share similar frequencies, which can sound muddy with no clear definition between them.

You can remove unwanted frequencies using EQ. To make matters more complex, there aren’t really any hard and fast rules as far as EQ goes.

Martin’s comments – “EQs basically work like filters; the border between the two isn’t sharp, and I’d rather call an EQ a combination of different filter types. For example, a 3-band EQ can consist of a low pass filter, one notch or bandpass, and one high pass.

You will most likely have an idea of how the settings of the simple 3-band EQ in the screenshot (Bitwig native; bands 2 and 5 are bypassed) will affect an incoming signal.
It’s set to make a 909 snare sound more “slim” and also to damp the high-mid noise”.
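As a rough illustration of Martin’s point – an EQ as a combination of filters – here’s a toy 3-band EQ in NumPy built from nothing but first-order low-passes. The 200Hz/2kHz band edges and the one-pole filters are my own simplifications, not how any real plugin works:

```python
import numpy as np

FS = 44100  # sample rate

def one_pole_lowpass(x, cutoff_hz, fs=FS):
    """First-order low-pass filter: lets lows through, rolls off highs."""
    a = np.exp(-2 * np.pi * cutoff_hz / fs)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1 - a) * s + a * acc
        y[i] = acc
    return y

def three_band_eq(x, low_gain, mid_gain, high_gain, lo=200.0, hi=2000.0):
    """3-band EQ as a combination of filters: the mid band is the
    difference of two low-passes, the high band is what's left over."""
    low = one_pole_lowpass(x, lo)
    below_hi = one_pole_lowpass(x, hi)
    mid = below_hi - low      # crude band-pass
    high = x - below_hi       # crude high-pass
    return low_gain * low + mid_gain * mid + high_gain * high

# "slim down" a signal: pull the lows, tame the highs a touch
x = np.random.randn(FS)  # one second of noise as a stand-in signal
slim = three_band_eq(x, low_gain=0.5, mid_gain=1.0, high_gain=0.8)
```

A handy sanity check: with all three gains at 1.0 the bands sum back to the original signal.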


Generally speaking, these are typical starting points – each mix will have its own requirements:

-> bass: remove frequencies below approx. 50–60Hz
-> kick drum: low cut at 50Hz, cut at 450Hz
-> vocals: low cut at 250Hz, boost at around 2,800Hz
-> piano: low cut at 120Hz, boost at 300Hz, cut at 2,800Hz

Let’s face it, we all have timing issues at multiple points in our lives, whether it’s asking the boss for a pay rise, asking someone out for a drink or telling the missus or mister you’ve just bought a synth. Again.

Music is no different. Sometimes you may need a rigid 4:4 beat for a techno tune, other times a more relaxed groove will suit the mood better.

If everything sits on the beat your song can sound too mechanical but go too much the other way and it won’t sound coherent.
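One way to sit between those two extremes is to start from a rigid grid and nudge each hit by a few milliseconds. A small Python sketch of that idea (the BPM and shift range are arbitrary choices of mine):

```python
import random

BPM = 128
STEP_S = 60.0 / BPM / 4  # length of one 16th note in seconds

def humanize(step_indices, max_shift_ms=12.0, seed=42):
    """Nudge rigid 16th-note grid positions by a small random amount,
    so the groove breathes without losing the beat."""
    rng = random.Random(seed)
    times = []
    for i in step_indices:
        shift_s = rng.uniform(-max_shift_ms, max_shift_ms) / 1000.0
        times.append(max(0.0, i * STEP_S + shift_s))
    return times

# an off-beat hi-hat pattern: grid steps in, humanised times (seconds) out
hats = humanize([2, 6, 10, 14])
```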

Back in the 70s when synths first became widely available, their analogue circuitry meant that they didn’t tend to keep their tuning very well. Into the 80s, the first digital synths still had a lot of analogue circuitry and a lot were hand-built so similarly had a tendency to drift and each one sounded a bit different. The upshot was a lot of time spent retuning them. This did give a natural movement in sound and some modern synths such as Synthmaster 2.8 even have a feature to apply this sort of drift to give a more analogue feel.
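That drift can be faked digitally in a few lines: the sketch below modulates a sine oscillator’s pitch with a slow random walk measured in cents. This is my own toy illustration of the idea, not how Synthmaster (or any synth) actually implements it:

```python
import numpy as np

FS = 44100  # sample rate

def drifting_sine(freq_hz, duration_s, drift_cents=8.0, fs=FS, seed=0):
    """Sine oscillator whose pitch wanders slowly around freq_hz,
    mimicking analogue tuning drift (drift depth given in cents)."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    walk = np.cumsum(rng.standard_normal(n))           # slow random walk
    walk = walk / (np.max(np.abs(walk)) + 1e-12) * drift_cents
    freq = freq_hz * 2.0 ** (walk / 1200.0)            # cents -> frequency ratio
    phase = 2 * np.pi * np.cumsum(freq) / fs           # integrate instantaneous frequency
    return np.sin(phase)

tone = drifting_sine(220.0, 2.0)  # two seconds of slightly unstable A3
```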

Timing issues are very common in mixes. A couple of years ago I remixed a Maya Wolff piano song, and I can tell you she’s got an outstanding ear: it took me a few listens before I heard a timing issue she’d spotted straight away.

Recently I’ve been using Hollyhock II as my primary DAW, which records performances live. This means that to capture spontaneity I’ve learned to accept imperfections. The results might not be perfect, but as long as they’re not horrendous I’m happy.

Stereo field
This is quite a complex topic, but essentially it’s about considering the height, width and depth of sounds – like creating a 3D object from a 2D drawing.

When we talk about height, bass frequencies are often perceived as sitting low and treble frequencies as sitting high. This is a perceptual association: bass occupies the lower part of the spectrum, treble the higher part, and we tend to map position in the spectrum onto vertical space.

Width is the position of the sound in the stereo field, whether central, left or right.

Depth is the tricky one: whether sounds feel close or far away. It’s certainly not as simple as adjusting volume; distance cues also come from high-frequency roll-off and how much reverb surrounds the sound.
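Width, at least, is easy to sketch in code. A constant-power pan law keeps perceived loudness roughly steady as a sound moves across the field (depth, as noted, needs more than this – typically level, filtering and reverb together):

```python
import math

def pan(sample, position):
    """Constant-power pan: position runs from -1.0 (hard left)
    to +1.0 (hard right). Perceived loudness stays roughly constant
    because left^2 + right^2 == 1 at every position."""
    angle = (position + 1.0) * math.pi / 4.0  # maps to 0 .. pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

left, right = pan(1.0, 0.0)  # centre: both channels at ~0.707
```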

Compression
This is a way of reducing the dynamic range between the loudest and quietest parts of a signal. A compressor works by attenuating the signal once it rises above a threshold; make-up gain then brings the overall level back up, so the quiet parts end up relatively louder. Back in the hardware-only days, you would tend to use compressors for mastering and know the controls inside out. These days it’s all too easy to load one on every track, and the question of whether to use them in this way is a hotly debated topic.
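For intuition, here’s a minimal static compressor curve in NumPy – just the threshold/ratio gain computation, with none of the attack and release smoothing a real compressor has (all values are illustrative):

```python
import numpy as np

def compress(x, threshold_db=-18.0, ratio=4.0, makeup_db=6.0):
    """Static compressor curve: level above the threshold is attenuated
    according to the ratio, then make-up gain lifts the overall level.
    No attack/release smoothing - a real compressor reacts over time."""
    level_db = 20 * np.log10(np.abs(x) + 1e-12)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return x * 10 ** (gain_db / 20)

# a full-scale sample is pulled down, a quiet one passes through untouched
loud = compress(np.array([1.0]), makeup_db=0.0)
quiet = compress(np.array([0.01]), makeup_db=0.0)
```

With a -18dB threshold and 4:1 ratio, a 0dB peak is 18dB over the threshold and gets pulled down by 13.5dB, so only 4.5dB of the overshoot survives.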

Reverb
One of the most popular effects, and one of the easiest to overuse. Reverb creates space in your mix by replicating the acoustics of a given space, whether that’s a room, a cave or a cathedral. Hard surfaces tend to reflect sound whereas soft surfaces tend to absorb it, so a reverb effect is all about reproducing these reflections.

A different type is convolution reverb, which takes another approach: it digitally simulates the reverberation of a physical or virtual space using a pre-recorded audio sample of that space’s impulse response and a bit of maths. OK, a lot of maths. The result is that you can precisely reproduce the response of a cathedral, cavern, bouncing ball or tiny speaker, every time.
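Stripped of the engineering, the core of it really is one operation: convolve the dry signal with the impulse response. A toy NumPy sketch, using a synthetic decaying-noise burst in place of a recorded impulse response:

```python
import numpy as np

def convolution_reverb(dry, impulse_response, wet_mix=0.4):
    """The heart of convolution reverb: convolve the dry signal with the
    impulse response of a space, then blend wet and dry."""
    wet = np.convolve(dry, impulse_response)
    out = np.zeros(len(wet))
    out[: len(dry)] += (1.0 - wet_mix) * dry
    out += wet_mix * wet
    return out

# stand-in impulse response: an exponentially decaying burst of noise
rng = np.random.default_rng(1)
ir = rng.standard_normal(2000) * np.exp(-np.linspace(0.0, 8.0, 2000))
click = np.zeros(4000)
click[0] = 1.0
tail = convolution_reverb(click, ir)  # the click now carries a decaying tail
```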

Mixing chains
Martin’s comments – “A special and also very basic type of signal processing chain is a mixing chain.

I will now outline the basic mixing chain, based on simple channel strip plugins which combine dynamics and EQ in one plugin, like on a classic mixing console.

Examples here are the Waves Audio Renaissance Channel and the wonderful-sounding free channel strips by Variety Of Sound (preFIX and NastyVCS).




They mainly consist of the following sections:

INPUT SIGNAL
-> gain control
-> a highpass filter and a lowpass filter (to broadly remove rumble and unwanted high frequencies)
-> EQ (fine tuning on frequencies)
-> Gate/Expander (dynamics)
-> phase correction
-> stereo field control
-> output volume control

Some plugins also offer the option to route the sections in different orders (e.g. INPUT -> dynamics -> EQ -> …) which can also influence the sound significantly.
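That reordering point is easy to demonstrate: if each section is a function from signal to signal, a chain is just function composition, and swapping the order changes the output. A toy Python sketch (the gain/clip processors are stand-ins of mine, not real channel-strip sections):

```python
from functools import reduce

def chain(*processors):
    """A mixing chain as function composition: each processor maps a
    signal to a signal, and the order of the list is the routing order."""
    return lambda signal: reduce(lambda s, p: p(s), processors, signal)

# stand-in processors for illustration
def gain(db):
    return lambda s: [x * 10 ** (db / 20) for x in s]

def clip(s):
    return [max(-1.0, min(1.0, x)) for x in s]

strip_a = chain(gain(12), clip)  # boost first, then clip: distorts
strip_b = chain(clip, gain(12))  # clip first, then boost: stays clean here
```

Feeding a half-scale sample through both strips gives different results, which is exactly the point about routing order.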

Depending on what plugins are in your tool box you can of course build your own mixing chain.
One simple chain built from two Waves plugins (actually, this is enough most of the time):


Most DAWs offer everything you could possibly need as native content (you’ll definitely find an EQ and a compressor) – here’s a simple chain of EQ and compressor in Bitwig.


Full mixing chains (pre-EQ -> compressor -> post-EQ -> stereo field and phase control), built in Live and Bitwig with native effects.”