
Mid Side Processing Explained

Often when I listen to tracks in my coaching group, I notice that the mid/side processing is really off. Without a solid M/S balance, a mix sounds thin and muddy rather than expansive and crisp. M/S is often the make or break between an okay mix and a radiant one, so it felt prudent to write an article on what mid/side processing is and how producers can get it right in the mix. Without further ado, here is mid/side processing, explained.

 

What Is Mid/Side Processing? 

So what is mid/side processing? Simply put, if you want a wide-sounding mix, you're going to have to pay attention to mid/side processing. Wide mixes tend to sound "better" to the listener, and they let the producer fit more sounds into the mix without it feeling cluttered. While width can be achieved through various panning and stereo techniques, mid/side is a strategy that can really dial it in and create a spatial mix.

 

Mid in Mid/Side Processing, Explained

The "mid" part of the mid/side process is essentially the mono component. It's the sound(s) that sit in the center of the mix. Kick drums, snares, melodies between 200–500 Hz (like a pad), and any other "static" sounds can, and often should, be placed in the mid. Sure, there are artistic exceptions, but this is a good rule of thumb.

The same goes for bass below 100 Hz; keeping it mono is best practice. Why? If your track is pressed to vinyl and the bass is in stereo, the needle can jump. Club systems also typically sum everything below roughly 100 Hz into a single mono signal, and if your bass is in stereo, that summing can cause phase cancellation that quiets parts of the mix.

 

Side in Mid/Side Processing, Explained

The side channel is the edges of the mix. Note: this is not to be confused with panning, where you move sounds to a specific position in the left or right stereo field. The side signal is the difference between the left and right channels; it is technically a mono signal, played hard left and, polarity-inverted, hard right.
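In code, the mid/side transform is just a sum and a difference. A minimal sketch in Python (the function names are mine, not from any tool):

```python
# A minimal sketch of the mid/side identity, assuming floating-point samples.
def ms_encode(left, right):
    """Split a stereo sample pair into mid (sum) and side (difference)."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side):
    """Recombine mid and side back into a left/right pair."""
    return mid + side, mid - side

# A hard-panned-left sample: left=1.0, right=0.0
# -> mid=0.5, side=0.5; decoding restores (1.0, 0.0)
left, right = ms_decode(*ms_encode(1.0, 0.0))
```

Note that a sound present equally in both channels has a side value of zero, which is exactly why centered material reads as "mono."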

When the side’s amplitude is increased, the listener hears a wider, fuller sound. A good way of using it is to increase the width of leads, or strategically move a percussion bus to the sides of a mix to create a fuller listening experience.

You can even get creative with this, and widen parts of the mix at different intervals in the song. For instance, whenever the chorus comes in, you can widen the leads on it, to give it a more present feeling, allowing it to become more expressive to the listener.

Pads are great for the sides as well, since they're audio that "hugs" you, in a way. Background noises also work well on the sides, like field recordings or weird ambiences, because they aren't meant to be present; only present sounds should sit in the middle. All decorative percussion can technically live on the sides too: swingy hi-hats, bells and whistles.

 

Side Processing May Cause Phasing

Once mid/side processing is explained to newbies, they often go straight out and start messing around with it. However, side processing can reveal one of electronic music's most dastardly foes: phasing. When two copies of the same sound sit on opposite sides of the stereo field with opposite polarity, they cancel each other out when summed. That means we have to be judicious with the sounds we put on the sides. Generally, "less is more" is a good approach here, since fewer sounds mean fewer chances of frequencies canceling each other out.
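You can verify the worst case, an identical sound with flipped polarity on each side, with a few lines of Python (a toy illustration, not a plugin workflow):

```python
import math

SR = 44100  # sample rate in Hz

def sine(freq, n):
    """Generate n samples of a sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

tone = sine(440, 1024)
inverted = [-s for s in tone]          # same sound, polarity flipped

# Put the original hard left and the inverted copy hard right,
# then sum to mono the way a club system would: total silence.
mono_sum = [0.5 * (l + r) for l, r in zip(tone, inverted)]
peak = max(abs(s) for s in mono_sum)   # 0.0: the two copies cancel completely
```

In real mixes the cancellation is rarely this total, but partial phase cancellation between the sides thins out the mono sum in exactly this way.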

 

How To Correct Phasing

If you want to correct phasing while keeping the sounds in stereo, the trick is to have one of them arrive just after the other, so they don't cancel. This can be done with a very short delay. When dialed in, the two copies are still perceived as one event, but the slight offset decorrelates them, allowing each to peek out from behind the other and be heard.
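This trick is often called the Haas effect, and it can be sketched in a few lines (the `haas_widen` name and the delay values are mine, purely illustrative):

```python
# Haas-style decorrelation: delay one channel by a few milliseconds so the
# two copies no longer cancel, while still being heard as a single event.
def haas_widen(mono, delay_ms=12.0, sample_rate=44100):
    delay = int(sample_rate * delay_ms / 1000.0)
    left = mono + [0.0] * delay       # dry channel, padded to equal length
    right = [0.0] * delay + mono      # delayed copy
    return left, right

# Tiny demo: a 1 ms delay at a 3000 Hz sample rate is a 3-sample offset.
left, right = haas_widen([0.2, 0.5, -0.3], delay_ms=1.0, sample_rate=3000)
```

Delays under roughly 30 ms are generally fused into one perceived sound; push the delay much longer and the ear starts hearing a distinct echo instead of width.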

A more immediate, definitive way to correct phasing is to make the sound more mono. A useful tool for this is the analyzer SPAN by Voxengo. This plugin shows the mono signal in yellow and the side signal in red; when the red goes beyond the yellow, you have to reduce the side. The tool to fix it with is the Utility device native to Ableton, which has a Width control: if you want the sound more mono, adjust the width down, then turn the volume up to compensate.

Mid/side processing can be visualized well with the VST SPAN. Here's a photo of it.

However, let's say you have a purely mono signal that you want to add some subtle stereo width to. Certain effects can do this. You can use a reverb with a short decay and a low wet level (otherwise it will be too loud).

You can also use a chorus. Eventide made a harmonizer that is beautiful for this: it's two delays, left and right, and when you play the delay times off each other, it creates an interesting stereo shape; you can then use the wet/dry to dial in degrees of stereo. If you don't have the money, you can use Ableton's Echo instead: set independent left and right times with a very short delay, then play with opening and closing the width.

 

EQing in Mid/Side Mode Is A Must

In my opinion, all EQing should be done in M/S mode. Sometimes you hear something you don't like in the mix, and if you just cut, you're cutting both the left and the right at the same time. However, sometimes you want a sound EQ'd differently depending on the channel it sits in.

For instance, let's say you have a synth mostly in your left channel, with much less of it in the right. When you place decorative percussion, there will most likely be a crossover in the panning.

But since the synth is primarily in the left channel, the percussion on the left will have to be EQ'd differently so it doesn't conflict with the synth. On the right, where there's all this open space, there's no need to cut those frequencies, allowing the sound to better express itself.

FabFilter Pro-Q 3 allows you to easily enter M/S mode for EQing and make precise cuts to the sound. If you don't have Pro-Q 3, you can unlink left/right in Ableton as well: EQ Eight defaults to a Stereo mode, but by clicking Edit you can select the left or right channel and treat them independently, or switch the device to M/S mode and edit either the mid or the side. When you do this, your sound feels more organic, because you're not cutting in the same place on both channels.
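As a rough illustration of side-only EQ, here is a sketch that high-passes just the side channel with a simple one-pole filter, keeping the lows mono while the highs keep their width. The filter design and names are mine, not Pro-Q 3's internals:

```python
import math

def highpass_side(left, right, cutoff=100.0, sample_rate=44100):
    """EQ only the side channel: one-pole high-pass below `cutoff`,
    leaving the mid untouched, so lows stay mono and highs stay wide."""
    mids  = [0.5 * (l + r) for l, r in zip(left, right)]
    sides = [0.5 * (l - r) for l, r in zip(left, right)]

    # one-pole high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])
    a = 1.0 / (1.0 + 2 * math.pi * cutoff / sample_rate)
    hp, prev_x, prev_y = [], 0.0, 0.0
    for x in sides:
        prev_y = a * (prev_y + x - prev_x)
        prev_x = x
        hp.append(prev_y)

    left_out  = [m + s for m, s in zip(mids, hp)]
    right_out = [m - s for m, s in zip(mids, hp)]
    return left_out, right_out

# A constant fully out-of-phase input (all side, no mid): once the filter
# settles, left and right converge because the low side content is gone.
l_out, r_out = highpass_side([1.0] * 5000, [-1.0] * 5000)
```

The same structure (split to M/S, process one branch, recombine) underlies any mid/side EQ, whatever filter you put in the side path.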

A photo of ProQ 3 which has a mid/side processing mode.

More Plugins That Impact Width and Phasing:

 

Panman

mid side processing explained through the vst Panman. This is a photo of that VST.


PanMan really splits open the possibilities of panning. First and foremost, it's a hardware emulation, which lets producers mimic the style of vintage hardware panning gear. You can also trigger panning when the track hits a certain threshold, and the automation lets you generate complex rhythms and stunning sweeps.

 

Microshift 

This is an image of Microshift, a great plugin for modifying your stereo field

Need some width? Well, Microshift's got width. It provides three separate kinds of stereo widening at a single button push, using pitch-shifting and delay algorithms that morph over time to generate brilliant stereo width. It's very easy to play around with and can be used to give more flavor to instruments or create nice blends.

 

MStereoGenerator

an image of MStereoGenerator, an excellent plugin for stereo imaging

MStereoGenerator is a unique, natural-sounding mono-to-stereo (or even surround) expander that makes your tracks sound wider, stronger, and punchier. It's especially good for acoustic instruments.

 

Panshaper 3 by Cableguys

An image of PanShaper, which allows you to do crazy stuff with M/S processing and panning.

PanShaper 3 takes control over your stereo field to another dimension. The real-time LFO that can be drawn on every band and the envelope follower allow you to design evolving, dynamic pan patterns and make dialed-in stereo edits in seconds.

 

Energy Panner

an image of Energypanner which allows for dynamic panning responsiveness to inputs

Energy Panner reacts to the sound intensity by moving in response to it. A drum kit that moves to the beat, synth notes that move on attack, and many other behaviors are possible. Whether it’s stereo or Dolby Atmos, Energy Panner is a plugin you shouldn’t be without.

 

Width Shaper 2 by Cableguys

an image of the vst WidthShaper 2 which allows for amazing stereo mid/side processing.

With WidthShaper 2, you can fine-tune your stereo image to the finest detail. With three mid/side stereo adjustment bands, each with its own drawable LFO and envelope follower, you can gain precise control over the sound. It is perfect for sound design, mixing, and mastering, and can be used on single tracks and buses.

Once you have mid/side processing explained to you, you can see there is way more to stereo than just left and right. With M/S EQing you can surgically cut into sounds and make them fit precisely in a mix. You can expand and retract sounds at different points in your track, creating those illusory, almost psychedelic effects in music that are nearly inexplicable, since they are best described as space rather than music.

However, with this power comes the responsibility of not phasing out your sounds and destroying the punch of your songs. Keeping space in mind, and how sounds relate to each other, is a paramount and often overlooked skill in music production.

I understand this can be complicated. If you need coaching or you just want to delegate this process to me, I’m available to help. Check out all of my services here.

VCV Rack Ideas And Meditations

If you're not familiar with VCV Rack, you should be. While modular synthesis is an expensive hobby, usually reserved for people with cash flow, the open-source, mostly free VCV Rack is democratizing access to this amazing creative tool. VCV Rack acts as a Eurorack DAW, letting you build complex patches from a variety of free or premium modules, often based on existing hardware. In this post, I won't talk so much about how to use VCV Rack, as there are tons of tutorials on it already. Instead, I'll talk about how it has inspired me, share VCV Rack ideas, and show how you can use it to inspire yourself.

 

VCV Rack Ideas Are Meditative

The first benefit I noticed from VCV Rack was how meditative it is. I have been an avid practitioner of traditional meditation for many years now, and the similarities are striking. It forces me into a state of flow, one that often results in the dissolution of all outside distractions. A state where time and space become irrelevant, and I am concentrated on a singular purpose. 

A lot of people conflate meditation with a silencing of thoughts, when that's far from the truth. We're human; we'll never be able to silence our minds. Instead, we have to funnel those thoughts into a singular focus and accept that we have no control over what happens, both of which working with VCV Rack ideas does for me.

 

The Random Beauty Of VCV

That's the beautiful thing about modular. These electrical circuits are so finicky that what you aim to create and what comes out are often vastly different. It's that sense of unpredictability that makes it so wonderful, and thus, meditative.

Surprisingly, this sense of randomness and unpredictability isn't lost in a digitized version of modular. Somehow the community of open-source developers has kept this aspect true to form, demonstrating the increasing dissolution of the line between analog and digital as technology advances.

Many of you probably don’t know this, but I was an actor, before I was a musician, and even went to school for it. While you have a script, the fine art of acting was always in the slight improvisations. Lines often wouldn’t come out as intended, and you would have to react in the moment. This would often lead to beautiful accidents, far superior to the original script. 

In a sense, the life that exists in modular is very much like a fellow actor on stage. It may have its lines, but ultimately, it will improvise, and it’s up to you to harness this improvisation and redirect it to something evolutionary.

 

photo of vcv rack ideas

 

VCV Rack Tutorials Are Fun

The second thing I noticed about VCV Rack was that it made me appreciate tutorials again. If you've been a power user of Ableton for as long as I have, you may find that Ableton tutorials can get super cookie-cutter and formulaic. This often results in stale music that sounds like everyone else who watched the same tutorial.

This may be one of the reasons why the sound we hear is becoming more and more homogenized, with less innovation seeping through the cracks, despite an exponential increase in the number of musicians. With VCV Rack, however, the tutorials feel more personalized. Even if you try to recreate something precisely, you'll almost never get the same result, due to the nature of modular. The sheer number of VCV Rack ideas you can get from these tutorials is incredibly exciting and constantly motivating.

 

VCV Rack Tutorials Help You Build Existing Hardware

Another fun aspect of tutorials is that there are some people out there, like Omri Cohen, who build faithful reconstructions of existing pieces of hardware. The one that I linked to is for the Moog DFAM, a semi-modular percussive sequencer that allows you to build really wild, synthesized patterns. 

Normally a DFAM runs for about $700 USD, which isn’t a small investment. While following these tutorials, you not only learn the ins and outs of the DFAM by building it piece by piece, but you also get to try it before you buy it, in a sense. 

This allows you to figure out your setups and configurations for your hardware studio without having to buy things and send them back if they don't fit. Also, since it's fully modular, you can in theory connect multiple recreated devices together and see how they would perform as hardware. So let's say you wanted to see how a DFAM would interact with a Moog Subharmonicon: this is now possible through VCV Rack, for the low, low cost of zero dollars. These are just some of the hardware VCV Rack ideas found in tutorials.

photo of the dfam vcv rack ideas

VCV Rack emulates the Moog DFAM

VCV Rack Changes How You Listen To Music

The third thing I learned while formulating VCV Rack ideas is that it changed how I listen to music. I normally have a nerdy way of listening from a sound engineer's point of view, analyzing EQ, compression, stereo spread, etc., but now I also notice the changes in the patterns, and how things modulate.

While toying around with VCV Rack ideas, I start thinking about the patches that make the sounds: what waveform is controlling the FM, how the envelope routes to the VCA, what hooks up to the clock, how many VCOs are in play. Still super nerdy, but it adds another dimension to the whole thing, which in turn stimulates more VCV Rack ideas.

 

VCV Rack Ideas Help You Build Your Real Modular Setup

The fourth benefit I got from playing with VCV Rack ideas is that just like it allows you to build existing pieces of hardware to test in your setup, it also acts as a way to test modules for your real modular setup. There are a bunch of faithful emulations of existing modules. For instance, the Audible Instruments line is a software emulation of Mutable Instruments. There, you can get clones of things like their Clouds or Tides modules and test them in your setup.

Other examples include:

 

  • Lateralus: Hybrid diode/transistor ladder filter, modeled on the Roland ladder circuit with a few alterations to bring it closer to the Moog circuit.
  • Vorg: A single segment of the filter circuit of the Korg MS-20.
  • Ferox: CMOS filter based on the circuit of the CGS749.
  • Nurage: Based on the circuit of the Thomas White LPG, itself based on the Buchla LPG.
  • Vortex: Based on the circuit of the Polivoks (Erica Synths version).
  • Tangents: Based on the Yusynth version of the Steiner-Parker filter; its three models are variations of the same circuit.
  • Stabile: Based on the textbook state-variable filter circuit (linear version).
  • Unstabile: A nonlinear state-variable filter with low-voltage simulation.

 

vcv rack ideas real clouds

The real Mutable Instruments Elements

vcv rack ideas photo

VCV Rack that includes Mutable Instruments emulations, aka Audible Instruments

 

VCV Rack Is Constantly Improving

Since it’s community supported, it’s constantly evolving, with new modules being added frequently. Most are free, too! 

Some people complain that it doesn't integrate smoothly into other DAWs, but VCV has heard those complaints, and the rumor is that version 2.0 will add this integration.

These are just some of my observations about VCV Rack and its amazing ability to spark creativity. In truth, the sheer number of ideas and inspirations you can get from it is quite staggering. If you have any questions about it, or want to share your experiences, feel free to contact me, or make a post in Pheek's Coaching Corner and let the community know what you think!

Sound Design and Arrangements Series Pt. 3: Repetition

This post is part of a series: Part 1 | Part 2 | Part 3

This post focuses on how I approach repetition in my music, as well as how I perceive it when working on clients’ music. While this is a very obvious topic for electronic music oriented towards dance, where patterns repeat, I understand that as an artist, it can be a very personal topic. Each genre has a way of approaching repetition, and if you’ve been browsing this blog, you will recognize some concepts previously covered that I’d encourage you to look into in more detail. I’d like to approach repetition in music by reviewing your workflow to avoid wasting time on things that can be automated.

Tempo

Using tempo to deliver a message is a very delicate subject. Often, before playing live in a venue, I would spend some time on the dancefloor and analyze the mood and the dancers' needs. I'd check what speed the DJ's set was at, how fast they'd mix in and out, and the reaction of the crowd. It has always surprised me how playing at 122 BPM vs. 123 BPM can shift the mood; I really can't explain why. When I'd make a song, I'd keep in mind that DJs could speed it up or slow it down, an important factor affecting energy. I find that increments of 5 BPM make a huge change in the density of the sound in the club. If you slow down very complex patterns, the sounds have room between themselves, which also lets listeners perceive them differently.

Whatever tempo you use, I highly recommend gating your short percussion or using an envelope shaper like ShaperBox 2 to really shape the space between your sounds and leave some "white space" between each of them. If you go for a dense atmosphere, I would recommend very fast-release compression, plus parallel compression, to make sure you're not overcrowding your song.

Sound Repetition

Once we find something we love, we tend to want to repeat it for the entire length of a song. This is, of course, a bit much for someone listening. People expect change: for sounds to have variants, and to be sucked in by perhaps something unexpected from the sound. Granted, John Cage would disagree and suggest that an idea could be repeated for 10 minutes with the listener still enjoying it, but I honestly haven't heard many songs (through experience or work) that kept me interested for that long.

The question is, how frequently can an idea be repeated?

It depends on a lot of factors, and while I don't claim to know the truth, there are techniques to keep in mind. I'd like to teach you how to learn what works best for your music. Let me explain some of my own personal rules, my "reality check" for the validity of a song and the questions around repetition.

First impressions never fail: This is really important. 99% of people I work with start losing perspective on and trust in their song's potential by doing extended production sessions. When you first open a project you've worked on, whatever hits you first is what you should fix in that session. Once that's done, save it under another name and close it. If you can space your sessions out by a few days or weeks (the best option), you can check your first impression of the song again and see if something new clashes.

Hunting for problems will haunt you: There's always something to fix in your song. Even when you think it's done, there will always be something. At one point, you have to let go and embrace imperfection. Many people fall into the mindset of searching for problems because they think they missed something. Chances are, they'll be fixing unnecessary things; what they actually think they're missing will be details that are technically beyond their current knowledge. Usually I do what I call a "stupid check" on my music, which is to verify levels, phase issues, clipping, and resonances. The rest is detail tweaking that I do in one session only. After that, I pass it to a friend for their impression. Usually, this will do it.

Listen with your eyes closed: Can you listen to your whole song with your eyes closed on the first listen? If yes, your repetition is working; otherwise, fix it, then move on.

Generating Supportive Content and Variations

In music production mode, if you want to be efficient and creative, you need a lot of different options. So let's say your motif/hook is a synth pattern you've made; what I would suggest is having multiple variations of it.

In this video, Tom showcases a way of working that is really similar to how I work (and how many other people work). It takes a while, but once you switch to create mode, it becomes really fun and efficient. The only thing is, I personally find he doesn't use repetition enough; while his approach is super useful for making short, slower songs with a pop drive like in the video, it is not great for building tension. Too much change is entertaining, but you really have to flex your creative muscles to keep it engaging. I would rather have a loop play to the point where the listener goes from "it should change now" to "I want this to change now." So perhaps there will be a change after 3–4 bars in your loop. This is up to you to explore.

How do you create variations?

There's no fast way or shortcut; creating good variations takes time and patience. It also takes a few sound design sessions to come up with interesting results. Randomizing effects is pretty much the best starting point; then you tweak to taste.

  1. MIDI Tools – The best way to start editing is to tweak your MIDI signal with different options. The MIDI tools included in Ableton are really useful at first: dropping in an arpeggiator, changing note lengths, or randomizing notes and chords can turn a simple 2-note melody into something with substance. One plugin that came out recently that I've been very impressed with is Scaler 2. I like how deep it goes with all the different scales, the artist presets (useful for a non-academic musician like me), and all the ways to take melodies and have templates ready to be tweaked for your song. One way to commit to what you have is to resample everything, like Tom did in his video. Eventually, I like to scrap the MIDI channel, because otherwise I'll keep going with new ideas and they'll probably never be used. If you resample everything, your sound is frozen in time; you can cut and arrange it to fit the song where it fits best.
  2. Audio Mangling – Once you have your MIDI idea bounced, it's time to play with it for even more ideas. There are two kinds of movement you can aim for: fast tweaks or slow ones. When it comes to fast events, like a filter sweep or a reverb send, I used to do it all by hand; it would take ages. The fastest way is to take a multi-effect plugin, randomize everything, and resample it. The one I found most useful for that is Looperator by Sugar Bytes. Internally you can generate random ideas, adjust quickly, control wet/dry, and easily go from very wild to mellow. It can make fast effect tweaks (common in EDM or dubstep) but slower ones too. Combine this with the Texture plugin to add layers of content to anything. For instance, instead of simply having a background noise, you melt it into an omnipresence in the song so it can react to it, making your constant noise alive and reactive. The background is a good way to make anything repetitive feel less repetitive, because the ear detects it as something changing and constantly moves its focus from foreground to background.
  3. Editing – This is the most painful step for me, but luckily I found a way to make it more interesting thanks to the Serato sampler. This amazing tool allows you, like the Ableton sampler, to slice, map, and rearrange. You can combine it with a sequencer like Riffer or Rozzler (a free Max patch) to create new combinations. Why Serato instead of the stock plugin? Well, it's just easy. I just want to "snap and go," if you know what I mean, and it demands no adjustments.

Editing is really where you can differentiate veteran from rookie producers. My suggestion to newcomers would be a simple list of different ideas.

  • Decide on internal rules: Some people like to have precise rules that are set early in the song and then respected throughout. I do it because it helps me understand the song's idea. If you change too much, it may fall into the realm of "experimental," and maybe that isn't what you had in mind. Every now and then, when booked for track finalization, people have a problem with the last third or quarter of their song: they lose focus and try to extrapolate or create new ideas. If you create enough material at the beginning, the last stretch gets easier. When people are lost, I usually listen to the first minute of the song and go "let's see what you had in mind at first" as a way to wrap it up around that logic. Basic rules can be created by deciding on a pattern and a series of effects that happen, more or less, at the same time, or on a sequence of elements or sections. Pop has very precise rules for sections, while techno "rules" are more about the selection of sounds and the patterns created.
  • Process, process, process: If I have one channel of claps or another sound, I want variations of it, from subtle to extreme. Why? Because even simple ones make a difference. It's what makes a real human drummer feel captivating (if he or she is good!): their playing slightly changes each time, even when playing a loop. Looperator is a good tool, but you could also use the stock plugins: start with the presets, move knobs as you process, resample, and you can already get some nice effects.
  • Duplicate everything: Each channel should have duplicates where you can drop all your wet takes. You can put them all on mute and test unmuting to see how it goes.
  • MIDI controllers for the win: Map everything you want to tweak, then record the movements of yourself playing. This will usually give you a bit of a human feel compared to something created by mouse clicks. You want to break that habit.
  • Use your eyes: I find that working with the clips visually and making patterns is a good way to see if you are using your internal rules and see if you use too many sounds.
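The humanizing idea in the tips above can be sketched as a tiny script that jitters the timing and velocity of a MIDI-style loop. The function name, note format, and jitter amounts are all illustrative, not from any particular tool:

```python
import random

def humanize(notes, time_jitter=0.01, vel_jitter=8, seed=None):
    """Add slight random timing and velocity offsets to MIDI-style notes
    (each note is a (start_in_beats, pitch, velocity) tuple), so a
    repeated loop never plays back machine-identical."""
    rng = random.Random(seed)
    out = []
    for start, pitch, vel in notes:
        start = max(0.0, start + rng.uniform(-time_jitter, time_jitter))
        vel = min(127, max(1, vel + rng.randint(-vel_jitter, vel_jitter)))
        out.append((start, pitch, vel))
    return out

# one loop, two slightly different takes: drop each take on a duplicate channel
loop = [(0.0, 36, 100), (1.0, 38, 96), (2.0, 36, 100), (3.0, 38, 96)]
take_a = humanize(loop, seed=1)
take_b = humanize(loop, seed=2)
```

Each call produces a new variation of the same material, which is the same subtle-change principle that makes a good human drummer captivating.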

Now, after all this, how do we know if a song’s repetition is good enough, and how do we know if it’s linear?

Validating against a reference track is a quick way to check, and taking breaks and spacing out your sessions is effective too. But the internal rules are, to me, what makes this work properly. I think the biggest challenge people face is that by spending too much time on a track they get bored and want to push things: add layers, change the rules, until what felt fresh at first has been changed to the point where the repetition principle isn't being used to its full potential. The best example of a master of repetition is Steve Reich and his masterpiece Music for 18 Musicians. There is no better demonstration of how much one can create by playing with repetition.

Some of the effects in it can be reproduced with delays, phasers, the delay on the channel, and such. You can also use the humanize patch to add a bit of random delay. I would strongly encourage you to listen to it a few times to fill yourself up with inspiration.

Sound Design and Arrangements Series Pt.1: Contrast

I’ve been wanting to do a series of posts about arrangements because I’m passionate about this aspect of music production, but also because I noticed many of the people I work with struggle with arrangements in their work. There are so many different approaches and techniques to arranging—everyone has their own, and that’s sort of the goal I’d like to drive home in this series. I invite you to make a fresh start in developing a personal signature, aesthetic, vocabulary, and personality.

This post is not for people who are just beginning with arrangements, but if you are, it still contains information that could be interesting to consider down the road.

What do I Mean by “Contrast” in the Context of Arrangements?

In design, contrast refers to two or more elements with certain differences, and those differences are used to grab attention or to evoke an emotion. When I teach my students about contrast, the easiest example with which to summarize the concept is a difference of amplitude (volume). In movies, to create surprise, excitement, or tension, the amplitude will start low and then rise either quickly or slowly, supporting the emotion present in the images.

In many electronic music songs, we have heard (too often) noise used as a rising element to create tension. Noise builds became a caricature of themselves at some point, given their overuse, but they're a good example nonetheless.

How is Contrast Used in Sound Design?

I spend my days working with musicians—contrast comes into play in different circumstances.

Within a single sound, it can be fast or slow changes from one extreme to another. I like to visualize this by analyzing a sound through different axes to help me understand what can be done to it.

  • Attack: Does it start abruptly or slowly?
  • Decay/Amplitude: Does it get really loud or is it more subtle?
  • Frequency/Pitch: Is it high, medium, low?
  • Release/Length: Short – Medium – Long – Constant?
  • Positioning: is it far or near? Low or higher in front of me?

Good contrast, generally, is to have two extremes in some of these domains. Think of a clap in a long reverb, as an example of how a super fast attack with a long release can create something unreal, and therefore, attention-grabbing. A sound that changes pitch is another form of contrast, as we go from one state to another.

Another way to think about contrast is that pretty much all complex sounds are combinations of multiple layered sounds. When done properly, they feel like one, and when done with contrast, the contrasting layer adds movement, texture, or something dynamic that revives the initial sound. Of course, short sounds are harder to inject contrast into, but think of a bird's chirp: it's basically a sine wave with a fast attack envelope on the pitch, and its sounds are short but incredibly fast-moving, too.
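That bird-chirp description can be sketched as a short sine with a fast downward pitch glide and a quick amplitude decay. All the values here are illustrative, not measurements of real birdsong:

```python
import math

def chirp(start_hz=4000.0, end_hz=1500.0, dur=0.08, sample_rate=44100):
    """A bird-chirp-like tone: a short sine whose pitch glides down fast
    while the amplitude decays quickly (a fast envelope on the pitch)."""
    n = int(dur * sample_rate)
    out, phase = [], 0.0
    for i in range(n):
        t = i / n
        freq = start_hz + (end_hz - start_hz) * t   # fast pitch glide
        phase += 2 * math.pi * freq / sample_rate   # accumulate phase
        amp = (1.0 - t) ** 2                        # quick amplitude decay
        out.append(amp * math.sin(phase))
    return out

samples = chirp()
```

Even in 80 milliseconds, the extreme motion on both the pitch and amplitude axes is what makes the sound attention-grabbing, which is exactly the contrast idea above.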

If you want to create contrast within a sound itself, the fastest way is to use a sampler and really take advantage of envelopes, mod wheel assignments, and of course LFOs—but it's really through the use of envelopes that you'll be able to produce a reaction to what's happening, sonically.

As I mentioned, the easiest way to produce contrast is by using two sounds with different characteristics: for example, a short sound vs. a long one, bright vs. dark, sad vs. happy, far vs. close, etc. When you use two sounds, you give the listener elements to compare, and the ear can easily perceive the difference.

When you select sounds to express your main idea, think of the characteristics in each sound you're using. Myself, I usually pick my sounds in pairs, then in batches of four. I'll start by finding one, and the next one will be related to the first. I'll keep in mind the axes of both sounds when I select them, and usually start with longer samples, because I know I can truncate them.

In the morning I usually work on mastering, and in the afternoon I'll work on mixing. The reason is that when I work on mastering, I get to work on all kinds of mixes, each with issues I need to fix to make the master ready for distribution. Paying attention to the mix, I often deal with difficult frequencies and will spend my time controlling resonances that poke through once the song is boosted.

When I'm mixing, I often deal with a selection of sounds that were initially picked by the producer I'm working with. The better the samples, the easier the mix will be, and in the end, the better the song will feel. What makes a sound great comes down to a few things:

  • Quality of the sample: clarity, low resonances, not compressed but dense, well-balanced and clear sounding, open.
  • High resolution: 24 or 32-bits, with some headroom.
  • No unnecessary use of low quality effects: no cheap reverb, no EQ being pushed exaggeratedly that will expose filter flaws, no weird M/S gimmicks.
  • Controlled transients: nothing that hurts the ears in any way.

You want to hunt down samples that are not too short, because you want to be able to pick their length. You won't need a sample that covers all frequencies—you'll want to feel invited to layer multiple sounds together without conflicts, or without one shelf of frequencies becoming overly saturated.

When I listen to a lot of mixes, the first thing I look for is the overall contrast between the sounds. If they lack contrast, they'll mostly mush together, be difficult to mix, and be harder to understand.

In theory, a song is a big sound design experiment that is being assembled through the mix. If everything is on one axis, such as making everything loud, you lose the contrast and make your song one-dimensional.

How is Contrast Used in Arrangements?

If contrast in sound design happens within one single sound, it's through an entire song or section that we approach contrast in arrangements. A song can have different sections—in pop, think "chorus", "verse", etc.—which are very distinct sections that can be used as moments through the song. You can move from one to another, and the more distinct they are from one another, the more contrast your storytelling will have.

Is this type of contrast essential? No, but it can engage the listener. This is why, for a lot of people, the breakdown and drop in electronic music are very exciting: there's a gap between them, and the experience of going from one to the other is intense and fun (especially on a big sound system).

In techno, linearity is part of the genre because songs are usually part of a DJ set, made to be assembled and layered with other tracks to create something new. Huge contrast shifts can be awkward, so some producers avoid them—tracks introduce contrast very slowly and subtly instead of with sudden, drastic changes.

So, what makes a song interesting, to me, or to anyone, is the main idea’s content, based on the listener’s needs. What do I mean exactly?

  • A DJ might be looking for a song of a specific genre and want its hook to match other songs he/she has.
  • Some people want to have a song that expresses an emotion to be able to connect with it (ex. nostalgic vibes).
  • Some other people might want music similar to songs they like, but slightly different, while others want to be exposed to completely new ideas.

When I listen to the songs I work on, my first task is to quickly understand what the composer is trying to say/do. If the person is trying to make a dance-oriented, peak-time song, I’ll work on the dynamics to be able to match music of the same genre and make sure all rhythmic elements work all together.

The precision in the sound design is quite essential to convey a message, whatever it might be. Sometimes I hear a melody and because of the sample used, it makes me frown—a good melody but weird selection of sounds results in an awkward message.

It's like trying to impress a first date with a compliment/gift that doesn't make sense—you wouldn't tell someone his/her nose is really big…?!

Good sound design in support of your idea is executed through arrangements; the whole combination of multiple sounds through a mix is what creates a piece.

Some examples of contrast use within arrangements could be:

  • Different intensity between sections, either in volume or density.
  • Different tones, emotions.
  • Changes in the time signature, or rhythm.
  • Changes in how sounds move, appear, or evolve.
  • Alternating the pattern, sequence, or hook, adding extra elements to fill gaps, holes, or silences.

One of the biggest differences between making electronic music 30 years ago and making it today is that back then, you'd make music with whatever you could find. Now we have access to everything—so how do you decide what to do when there are no limits?

I find that when you remove all technical limitations, like sound selection, from your session, you can focus on design and storytelling. The same goes if you feel you've got a handle on your technical requirements and want to dig deeper—then you can start with contrast.

To summarize this, use contrast within a sound to give it life, either by slow or fast movements. Create contrast in your arrangements by having differences between sections of your song—play with macro changes vs. micro changes.

Tips to Keep a Loop Interesting for an Entire Song

To keep a song built mostly on a single loop interesting, we need to discuss how you work and how you perceive your music. I can't just recommend technical bells and whistles that will solve everything. You need to think about how you see your music, and from there, there are certain things that I think can make a difference in keeping a listener engaged, even if your song is built around a single loop.

There are two main things you need to consider with regards to listener engagement when making a song:

  1. How someone listens to a song.
  2. How your song can engage the listener in his/her experience.

Meeting Your Listener’s Expectations

If you read this blog, you'll know this topic has been covered in other posts, so I won't go deeply into it again, but I'd like to remind you of a few key elements. The first and most important point here is to understand what you want to do in the first place. From the numerous talks I've had with clients, this is where many people get lost. What you want to do with a song has to be clear from the start.

Is a plan for a song something set that can’t be changed afterwards?

Of course you can change your mind, but this can open a can of worms, as the direction and vision of what you want to do becomes less clear. Music is about communicating some sort of intention.

When, in the music-making process, should you set your intention?

You don't have to state your intention explicitly, of course, but doing so helps if you're struggling with a lack of direction or feel you can't reach your goals. I find there are two important moments where setting an intention can provide significant benefits. The first is when you start a project—when you start a song, you can think of something somewhat general, such as "an ambient song" or "a dance-floor track"; but the more precise you are, the more you establish boundaries for your wandering mind. Many people don't feel this approach helps and may skip this aspect of writing music, but for others, it can be leveraged to maximize your efforts in what you do.

For instance, I often make songs without a precise goal because I just like to let things flow and see how the way a song is made affects the end product. But when I'm asked to make an EP, I need to focus on results.

For me, for example, to meet my client’s expectations, I need to know what they want. It helps if they work in a specific genre or can reference an artist they like so I can help them deliver music that will appeal to people with similar tastes. When working with a clear intention, one needs to study how the music is made, more or less, in terms of variations, transitions, number of sounds, duration, tones, etc.

The objection I always get to this recommendation is "yes, but I want to have my own style." I feel this is a bit of an erroneous statement. We are always influenced by other artists, and if you're not, then you might have a problem on your hands: who are you making music for?

I know some people who make music for themselves, which is great. But when they tried to sell or promote it, there was no way to know who it was for because we had no model to reference. Can you be original and still be heard? Yes, but I think a certain percentage of your songs need some sort of influence from a genre people can relate to—for example, a very personal version of drum and bass, or house—then your music will fall under a certain umbrella.

Meeting Your Expectations and Your Listeners' Expectations at the Same Time

The number one problem I hear of is the producer being bored of his/her own music, rather than worrying that the listener might be bored—and that's quite normal, considering the amount of time one can spend making music. Personally, I make my songs with a meticulous approach:

  • 1 idea, 2 supporting elements.
  • Percussion, limited to 5 elements maximum.
  • Bass.
  • Effects, textures, and background.

That’s it.

The main idea rarely evolves more than 2-3 times in a song. If it changes more frequently than that, you might want it to evolve at a regular, precise interval, e.g. changing every 2 bars.

When Writing Music, How Can You Keep a Single Idea Interesting?

I use design principles that are used in visual content and apply them to my music. If you learn about these principles for music-making, you’ll develop a totally new way of listening to music. In searching for these principles, you’ll see some variety, but generally these are the ones that usually come up:

Balance: This principle is what brings harmony to art. Translating this to music, I would say that, mixing wise, this could mean how you manage the tonal aspect of your song. If we think of sound design, it could be the number of percussion sounds compared to soft sounds, or bright vs dark. I find that balanced arrangements exist when there’s a good ratio of surprises versus expected ideas.

Contrast: Use different sources, or have one element that is from a totally different source than the others. This could be analog vs digital, acoustic versus electronic, or having all your sounds from modular synths except one from an organic source. If everything comes from the same source, there’s no contrast.

Emphasis: Make one element pop out of the song—there are so many ways you can do this! You can add something louder, or you could have one element run through an effect such as distortion, and so on. Emphasis in music is often related to amplitude, dynamic range, and variations in volume. In a highly compressed mix, it will be difficult to make anything “pop”.

Pattern: This is about the core idea you want to repeat in your song. It can also be related to the time signature, or an arpeggio. It could be the part you repeat in a precise or chaotic order.

Rhythm: This is the base of a lot of music in many ways, and this, to me, can directly refer to time signature, but it can also mean the sequence of percussion. You can have multiple forms of rhythm as well, from staccato, chaotic, robotic, slow-fast…it’s really one of my favourite things to explore.

Variety: This relates to the number of similar sounds versus different ones. It's a bit more subtle to apply in music compared to visual design, but one way I see it is in how you repeat yourself (or not) in your arrangement. If you make a song evolve with no variety, you might lose the listener's attention…and the same goes if you have too much variety.

Unity: This is what glues a song together. To me, the glue comes from mixing, but there are things you can do to make it easier, such as using a global reverb, some compression, a clean mixdown, the same (coloured) pre-amps, or an overall distortion/saturation.

To wrap this up, I can’t recommend to you enough to space out your music sessions, set an intention and pay attention to your arrangements. If you know what you want to achieve with your song, you can refer to a specific reference, and then build up your ideas using some of the design principles I have discussed in this post. Good luck!

Make Music Faster: Self-Imposed Limitations for Expanding Creativity

"I think we need to go backward now", is what I said to a friend who was asking what was ahead for the year—referring to a view I'd had years back about recognizing when it's time to go with the flow, and when it's time to reverse or deflect it and move in another direction. I was thinking back to the mp3 revolution of 2001, when geeks downloaded all the music they wanted thanks to Napster and other software. There was a continuous debate about music being copied and shared. Back then, it was mostly pop and commercial music taking the biggest hit from file-sharing. In underground culture, netlabels became a mysterious movement, sharing music for free. Now free music is common, but back then it was really seen as a nonsensical approach for a label—"backward thinking" even—and was often talked down and ridiculed.

Back then, Dennis De Santis (who now works for Ableton) and I were approached to be part of a compilation for a German netlabel called Thinner (which eventually became a fairly well-known netlabel). Why did I do it? There were two main contributing factors:

  • I wasn’t putting releases out at that time, and I was a yes-man to whatever would come my way.
  • There was a huge new audience flow of people who wanted music for free…so why not just give it to them?

I decided to go with the flow. In doing this, you get pushed in a direction and accept that you might not control where you’ll end up. In my case, I’d say it only led me to great things—meeting people, getting gigs, and a lot of attention.

It was no surprise that when I started my own label, Archipel, in 2004, I kicked it off as a netlabel as well. But in 2006, I decided to go against the flow and do what many didn’t really approve of, which was sell music on Beatport. It was the beginning of digital music sales and many people thought it wouldn’t work, but it did really well.

My point is, there are times when it makes sense to keep going in a certain direction, and other times when changing direction is more sensible. Keeping this in mind, flexibility is something that can be applied to many spheres, such as your music's aesthetic, or even a song itself.

As I've mentioned, I recently joined Weeklybeats—a challenge to create one song per week for an entire year—and I've experienced a great feeling of freedom. Normally, I impose a very rigorous workflow on myself when I make music, and often it can take me months to finish a song. Switching my approach to a faster pace forced me to think less. Yes, there's a risk of reduced quality with increased speed, but at the same time, with the experience I've gained over time, I know I can at least make sure that the production is solid.

I also realized that my number one distraction is that I'm constantly bombarded with new music tools promising tons of new features; I spend a hell of a lot of time going through them and waiting for a sale to buy them, but never really pushing the stuff I already own to its maximum potential. With this weekly challenge and its self-imposed limitations in mind, I feel like I've experienced a huge breakthrough.

Time

Deadlines make you creative and productive. A friend who is a father of two told me recently that he realized that he was creating his best ideas in moments where he’d squeeze a quick session of music, knowing that he’d be limited to maybe 10 minutes. So, let’s say he had to go to the grocery store; while people were getting ready, he’d open Ableton and would test a new macro he made, or would try to make temporary arrangements. The time-constraint made him more efficient than when he’d have a full evening to himself to make music, which often led to nothing interesting.

My theory is that with too much time, you can spoil what you make. This is why I think 5 hours of studio time spent on one song is not the best idea—a thought I have proven to be correct for myself while taking part in this weekly challenge. Now, I take a few hours to create an idea, save it, and later will expand it—the next day I add a layer, and so on. I’m limited in time and I do multiple things at once, but I’ll squeeze in 20 minutes here, 40 minutes there, then 10 minutes before going to bed.

Try this fun Max patch that will time your work and give you an idea of how much time you’ve spent on things.

Tip: Give yourself a due-date for wrapping up a song, and accept that it is what it is once you hit it. It's more important to move on than to try to reach some illusory perfection. Use your agenda alarm as a reminder.
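If you'd rather build the reminder yourself than use a patch or an agenda alarm, the idea is simple enough to sketch. This is a hypothetical, minimal Python version of my own (not from any tool mentioned here); the clock, sleep, and alert functions are injectable only so the logic is easy to test:

```python
import time

def session_timer(minutes, now=time.time, sleep=time.sleep,
                  alert=print, check_every=60):
    """Block until a self-imposed session limit is up, then nag.
    Call it when you sit down to work; when it fires, wrap up and move on."""
    deadline = now() + minutes * 60
    while now() < deadline:
        # Sleep in short slices so the timer stays responsive to the deadline.
        sleep(min(check_every, deadline - now()))
    alert(f"Time's up after {minutes} minutes -- wrap up and move on.")

# Usage (uncomment to run for real):
# session_timer(40)  # a 40-minute sketching session
```

Running it in a terminal next to your DAW is enough; the point is the hard stop, not the tooling.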

I decide the length of my song before trying to speed things up. This tip has been discussed many times on the blog, but I'll insist that doing this is a strong limitation that clarifies a lot of things.

Tools

If you’re a reader of this blog, you’ll remember that for one song I encourage you to focus on one main idea supported by two minor ideas. It’s really easy to get lost trying to find an idea to start with. My take is to try to use what comes out fast.

Synths: Know what you have—cycle through synths that came with your DAW, and some that didn’t. I encourage people to get at least one synth that is an analog emulation of a classic model (Arturia does a great job at these) and another that is focused on a wide range of sound design options (I’m a big fan of Rob Papen and encourage you to test his products).

Samplers: Honestly, Ableton Live’s Sampler does the job for me. There are a few more alternatives out there but in the end, they all do a similar job except some have more bells and whistles. I always come back to the stock sampler because it’s simple and extremely versatile.

Once you have decided if you’ll generate a sound or use a sample, it’s time to play with it. Mapping a MIDI controller is very useful for playing different notes. Sometimes I see people in front of their keyboard and they are not sure what to do. This might sound obvious but when jamming, I test:

  • different pitches by playing higher and lower notes.
  • harder or softer hits to see how the velocity influences things.
  • listening to the sound at different volumes. Sometimes a sound at very low volume is much more interesting than a loud one.
  • alternating between short and long notes. Depending on your preset, it can play differently.
  • playing fast and slow notes to see how they feel.

Keep in mind that you can make a song out of any sound if you know how to use it. The reason we discard sounds is because we're after something else—we're not paying attention to the sound and its potential. Limiting yourself to only one tool per song eliminates a lot of exploration time. It also forces you to do something with what you have.

The same goes for reverb, compression, and EQ: I'll only use one or two, max. When I'm in mix mode, I usually explore different compressors.

Composition

If you use a modular, or hardware, you have your gear in front of you and you’ll just start working with what you have. This limitation forces you to be creative. But on a computer, you’ll have many ways to make music.

Templates. To speed up my work, I created a main template that I use to create macros and techniques, while recording everything. I mostly jam and don't spend too much time going into detail—it's raw on purpose. When I have something potentially interesting, I make a channel called "ideas" and put my clips in it. Later, when I start working on a song, I can open the template from the left-side browser and import the "ideas" channel into my new song to select from it. Have multiple templates that you import your sounds into, and in those other templates, create sound modifiers. For instance, I have a dub template filled with tons of reverb modulators and delays; I can drop anything through it and something dubby will emerge.

Jam. I try to invite people to jam their song as much as possible. Whenever I have a loop as a main idea, I'll automatically start recording and will mute it, play it, change the volume, and try different combinations. This lets me explore ideas I couldn't discover by just mouse-editing the clips in the arrangement view.

Sound

For the longest time, we wanted access to as many samples as possible, but now that we have them, we're completely lost. Try to decide which snare or clap you want. Swapping out a sound isn't usually super easy, but I found an amazing step sequencer that fixes this problem: it's made by XLN and it's called XO.

If you want to make music quickly, you need to find your favourite sounds and create drum kits. Import them whenever you start a new song. Back in the day you’d have a 909 or a 808, and that would be your drum kit, end of story. So create a good main kit, then add a few different ones, and that’s it.

And for crying out loud, stop thinking that you need to do everything from scratch, all the time! Yes, it’s cool, but it slows you down a lot.

I mentioned that I’d “go backwards” this year. What I meant by that is that all my habits have to be upgraded or changed. Habits keep me safe and comfortable, while feeling uneasy forces me to be creative and think outside-the-box. Join me in this approach; I’m sure there’s magic waiting for you too!

SEE ALSO : Reverb Tips to Boost Your Creativity

Creating a music sketch

In this post, I’d like to explain how making a music sketch can help you to stay on track when creating a song or track, much like how a painter creates an initial sketch of his/her subject. I’ve explained in previous posts that the traditional way of making music goes something like this:

  1. Record and assemble sounds to work from.
  2. Find your motif.
  3. Make and edit the arrangements.
  4. Mix.

Here we’re talking about a way of making music that was popularized in the 1960s and is still used frequently today. But what happens when you have the ability to do everything yourself, and from your computer alone? Can you successfully tackle all of these tasks simultaneously?

When I do workshops, process and workflow are generally tricky topics to address because everyone has a different point of view and way of working. However, to me it always comes down to one thing—how productive and satisfied an artist is with his or her finished work. Satisfaction is pretty much the only thing that matters, but I often see people struggle with their workflow, mostly because they keep juggling different stages of music-making and get lost in the process (sometimes even losing their original idea altogether). For example, an artist might start with an initial idea, then get lost in sound design, which leads them to working on mixing, and sooner or later the original idea doesn't feel right anymore. For some people, perhaps it's better to do things one at a time; the old, before-the-personal-computer way still works. But what if breaking your workflow into distinct stages still doesn't work? Is there an alternative approach?

In working with different artists and making music myself, I’ve come to a different approach: creating a music sketch—a take on the classic stage-based process I just mentioned. Recently, this approach has been giving me a lot of good results—I’d like to discuss it so you can try it yourself.

Sketching your songs and designs

I completed many drawing classes in college because I was studying art. If you observe a teacher or professional painter working, you’ll see that when they create a realistic painting of a subject, they’ll use a pencil first and sketch it out, doodling lines within a wire-frame to get an idea of where things are. Sketching is a good way to keep perspective in mind, and to get an idea of framing and composition. The same sketching process can be used in music-making.

When I have an idea, I like to sketch out a "ghost arrangement". Sometimes I even sketch out some sound design. The trap a lot of people fall into when making a song—particularly in electronic music—is striving to create a perfect loop right from the start. People work endlessly on a "perfect loop" in the early stages of a song, but when you're just starting, the loop has no context, and it will be much more difficult to create something satisfying—which is why the perfect loop is, honestly, really not that important. By quickly giving your loop context through a sketch-type process—arranging, or giving the project a bit more direction—you'll hear what's wrong or missing.

I’m of the belief that having something half-done as you’re working can be acceptable instead of constantly striving for perfection. I think this way because I know I’ll revisit a song many times, tweaking it a little more each time.

Sketching a song can be done by understanding, at the beginning of the process, that you'll work through the stages of music-making more quickly and roughly, knowing you'll fix things later on. This is more in line with how life actually goes: we live our lives knowing some problems will get solved over time, and that there are many things we don't know at any particular moment. In making music, some people become crazy control freaks, wanting to own every single detail, which leads them down a rabbit hole of perfectionist stagnation, in my opinion.

Creating a sketch in a project is simple. Since I work with a lot of sound design, I usually pick something that strikes a chord in me…awakens an emotion somehow. Since this will be my main idea, next I'll try to decide how it will be used as a phrase in my song. In order to get that structured, I need to know how the main percussion will go, so I'll drop in a favourite kick (usually a plain 808) and a snare/clap. These two simple percussive sounds are intentionally generic because I'll swap them out during the mixing process. You just want a kick in there to get an idea of the rhythm, and the snare clarifies the swing/groove.

Why are the basic kick and snare swapped out later?

I swap out the snare and kick later because I find that I need my whole song to be really clear before I can decide on the exact tone of a kick. A kick can dramatically change the whole perspective of a song, depending on how it’s made. Same thing goes for a snare—it’s rare I’ll change the actual timing of the samples, but the sound itself pretty much always changes down the line.

For the rest of the percussion, I’ll sketch out a groove with random sounds that may or may not change later on, but I use sounds I know are not the core of my song.

With bass, I usually work the same way; I have notes that support the main idea but the design/tone of the bass itself has room to be tweaked later.

As for arrangements, when creating a music sketch I will make a general structure as to what goes where, when some sounds should start playing or end, and will have the conclusion roughly established.

Design and tweak

Tweaking is where magic happens—this is where, in fact, a lot of people usually start their music-writing process. Tweaking and designing is a phase where you clarify your main idea by creating context. I usually work around the middle part of the song; the heart of the idea, then work on the main idea’s sound design. I layer the main idea with details, add movement and velocity changes.

  • Layering can be done by duplicating the channel a few times and EQing the sub-channels differently. Group them and add a few empty channels where you can add more sounds at lower volume.
  • Movement can imply changes in the length of the sound's duration (I recommend Gatekeeper for quick ideas), panning (PanShaper 2 is great), frequency filtering, and volume changes (check mVibratoMB for great volume modulation). The other option is to add effects such as chorus, flanger, or phaser that modulate with a speed adjustment. Some really great modulators are mFlangerMB (because you can pick which frequency range to affect—I use this for high-pitched sounds), chorus (mChorusMB) to open the mids, and phasers (Phasor Snapin) for short sounds. Another precious tool is the LFO by XFER—basically, you want the plugin to have a wet/dry option, and you should keep the wet signal pretty low.
  • Groove/swing. This is something I usually do later—I find that adjusting it in the last stretch of sketching provides the best results. The compression might need to be tweaked a bit, but in general the groove becomes much easier to fix once everything is in place.
  • Manual automation. Engineers will tell you that the best compression is done by hand, and that compressors are there for fast tweaks you can't do manually. The same goes for automation: being able to make your transitions and movements using a MIDI controller is a really nice finishing touch that's perfect at this stage.

Basically, the rule of finalizing design is that whatever was there as a sketch has to be tweaked, one sound/channel at a time. Don't leave anything unattended—leaving things untouched can stem from a fear of "messing things up".

When tweaking specific sounds from the original sketch, you should either swap out the original sound completely, or layer it somehow to polish it. I always recommend layering before swapping. I find that fat, thick samples are always a combination of about three sounds, which makes them sound rich. When I work on mixing or arrangements for my clients and I see a clap that's a single, simple layer, I have to work on it much more using compression, sometimes doubling the sample itself, which in the end gives it a new presence. Doubling a sound—or even tripling it—gives you a lot more options. For example, if you modulate the gain of only one of the doubles, you not only make the sound thicker but also give it movement and variation.
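The doubling-with-modulation idea is easy to sketch outside a DAW. Here's a rough Python illustration of mine (function name, LFO rate, and depth are all arbitrary assumptions): layer a copy of a sound on top of itself, but let a slow sine LFO move only the copy's gain, so the sum is thicker and breathes over time:

```python
import math

def double_with_motion(samples, sr=44100, lfo_hz=0.5, depth=0.5):
    """Sum a sound with a gain-modulated copy of itself.
    Only the copy's gain moves (driven by a slow sine LFO), so the
    combined sound is thicker *and* has movement and variation."""
    out = []
    for i, x in enumerate(samples):
        # Slow LFO in the range 0..1.
        lfo = 0.5 * (1 + math.sin(2 * math.pi * lfo_hz * i / sr))
        # Copy's gain breathes between (1 - depth) and 1.
        copy_gain = 1.0 - depth + depth * lfo
        # Sum original + modulated copy, scaled to stay in range.
        out.append(0.5 * (x + x * copy_gain))
    return out
```

In a DAW, the same result comes from duplicating the channel and putting a slow volume LFO (or hand-drawn gain automation) on the duplicate only.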

All this said, I would recommend making sure your arrangements are solid before spending a lot of time in design. Once you start designing, if your arrangements have a certain structure, you’ll be able to design your song and sounds specifically according to each section (eg. intro, middle, chorus, outro) which gives your song even more personality. Sound design completed after a good sketch can be very impactful when the conditions are right.

Try sketching your own song and let me know how it goes!

SEE ALSO : Creating Timeless Music

Using Modular Can Change the Way You View Music Production

Are “sound design” and “sequencing” mutually exclusive concepts? Do you always do one before you do the other? What about composition—how does that fit in? Are all of these concepts fixed, or do they bend and flex and bleed into one another?

The answers to these questions might depend on the specific workflows, techniques, and equipment you use.

Take, for example, an arpeggiator in a synth patch. There are two layers of sequencing needed to produce an arpeggio: the first layer is a sustained chord; the second layer is the arpeggiator stepping through it. Make the arpeggiator run extremely fast, up to audio rate, and we no longer have an audible sequence made up of a number of discrete notes, but a complex waveform with a single fundamental. Just like that, sequencing has become sound design.
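Here is a toy Python sketch of that threshold (not real synth code; the chord “notes” are simplified to held output levels). Stepping through four values a few times per second gives an audible sequence; stepping through them 3,520 times per second yields a periodic staircase waveform with an 880 Hz fundamental.

```python
SR = 44100  # sample rate (Hz)

def arp_signal(chord_values, steps_per_second, seconds):
    """'Arpeggiate' a set of held values by stepping through them in order.

    At a few steps per second this is a sequence of discrete levels; at
    audio rate the repeating staircase itself becomes a periodic waveform
    whose fundamental is steps_per_second / len(chord_values).
    """
    n = int(SR * seconds)
    samples_per_step = SR / steps_per_second
    return [chord_values[int(i / samples_per_step) % len(chord_values)]
            for i in range(n)]

chord = [0.0, 0.5, 1.0, 0.5]          # four arpeggio steps (normalized levels)

slow = arp_signal(chord, 4, 1.0)      # 4 steps/s: an audible note sequence
fast = arp_signal(chord, 3520, 1.0)   # 3520 steps/s: a complex tone

# One full pass through the chord repeats 3520/4 = 880 times per second,
# so the fast version has an 880 Hz fundamental.
period = SR / (3520 / len(chord))     # samples per full chord cycle
```

The boundary between “sequence” and “waveform” is just the step rate; nothing else in the code changes.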

These two practices—sequencing and sound design—are more ambiguous than they seem.

Perhaps we only see them as distinct from each other because of the workflows that we’re funneled towards by the technologies we use. Most of the machines and software we use to make electronic music reflect the designer’s expectations about how we work: sound design is what we are doing when we fill up the banks of patch slots on our synths; sequencing is what we do when we fill up the banks of pattern slots on our sequencers.

The ubiquity of MIDI also promotes the view of sequencing as an activity that has no connection to sound design. Because MIDI cannot be heard directly, and only deals with pitch, note length, and velocity, we tend to think that that’s all sequencing is. But in a CV and Gate environment, sequencers can do more than sequence notes—they can sequence any number of events, from filter cutoff adjustments to clock speed or the parameters of other sequencers.

Modular can change the way you see organized sound

Spend some time exploring a modular synthesizer and these sharply distinct concepts quickly start to break down and blur together.

Most people don’t appreciate how fundamentally, conceptually different CV and gate is from MIDI. MIDI is a language, designed according to certain preconceptions (the tempered scale being the most obvious one). CV and gate, on the other hand, are the same stuff that audio is made of: voltage, acting directly upon circuits with no layer of interpretation in between. Thus, a square wave is not only an LFO when slowed down, or a tone when sped up, but also a gate.

What that square wave is depends entirely on how you are using it.

You can say the same thing about most modules. They are what you use them for.
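A tiny numeric sketch can show this “one voltage, many roles” idea (a hypothetical stand-in in Python, not modular code): the same `square` function is a modulator at 2 Hz, a pitched tone at 110 Hz, and a stream of gates when read as on/off.

```python
def square(freq_hz, sr, seconds):
    """A naive square wave: +1 for the first half of each cycle, -1 after.

    A stand-in for a voltage source in a modular system.
    """
    n = int(sr * seconds)
    return [1.0 if (i * freq_hz / sr) % 1.0 < 0.5 else -1.0 for i in range(n)]

SR = 1000  # a low "control" rate is enough for this illustration

lfo  = square(2, SR, 1.0)     # 2 Hz: slow enough to be a modulator
tone = square(110, SR, 1.0)   # 110 Hz: fast enough to hear as a pitch
gates = [v > 0 for v in lfo]  # read the very same voltage as on/off gates

# Count rising edges of the "gate" reading: a 2 Hz square wave opens the
# gate twice over one second.
triggers = sum(1 for a, b in zip([False] + gates, gates) if b and not a)
```

What the signal “is” never changes; only the interpretation at the destination does, which is exactly the point about modules.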

Take Maths from Make Noise. It’s a modulator. No, it’s a sound source. No, it’s a modulator.

To go back to our original example: a sequencer can be clocked at a rate that produces a distinct note, and that clock’s speed can itself be modulated by an LFO, so the voice that the sequencer is triggering goes from a discrete note sequence, to a complex waveform tone, and back again. The sound itself goes from sequence to sound effect and back to sequence…

Do you find this way of looking at music-making productive and enjoyable, or do you prefer to stick to your well-trodden workflows? Does abandoning the sound design – sequencing – composition paradigm sound like a refreshing, freeing change to you? Or does it sound like a recipe for never finishing another track ever?

SEE ALSO : “How do I get started with modular?”

Are Music Schools Worth The Investment?

Whether or not music schools are worth the money might spur a heated debate—schools worldwide might not like what I’m about to say, but I think this topic needs to be addressed. What’s outlined in this post is based on my personal experience(s); I invite anyone who wants to discuss this topic further to contact me.

Music schools: an overview

Many people over the last few years have been asking for my opinion on enrolling in music production schools. There are many production and engineering schools in the world, and a lot of them charge a lot of money to attend. In Montreal, we have Musitechnic (where I have previously taught mastering and production) and Recording Arts. Most major cities around the world have at least one engineering school, and if not, people can still study electro-acoustics at university. A university degree takes at least three years; most private schools condense the material into one year. During that time, students cover the physics of sound, mixing, music production in DAWs, recording, and sometimes mastering. While each of these subjects usually takes years to really master, the introduction to each can be very useful, as you’ll learn the terms and logic of how these tasks work and what they are for.

If the teachers are good at explaining their topic(s) and have a solid background, there’s nothing quite like being in the presence of someone with a great deal of experience—not only for the valuable information they provide, but also for the interpersonal context. Having a good teacher will pay off if you ask questions and are curious. While I don’t teach at Musitechnic anymore, some of my past students are still in contact with me and ask me questions—I even hired some for internships. Many students have told me that they remembered more from hearing about their teacher’s experience(s) than from the class content or material.

One issue with audio teachers I hear about a lot is that many times, teachers might be stuck in a specific era or on a precise genre, which might be difficult for a student to relate to; there might be a culture clash or a generation gap between themselves and the teacher.

For instance, if a school has teachers who are from the rock scene, many people who are interested in electronic music or hip hop will have a really hard time connecting with them. Similarly, sometimes the teachers who make electronic music can even be from a totally different sphere as well, and mentalities and approaches can clash.

The advantages of attending a school or program

There are, however, many beneficial outcomes from attending a music school:

  • you’ll get a solid foundation in audio engineering, and validation from experts.
  • you’ll end up getting a certificate that is recognized in the industry.
  • you’ll have access to resources, equipment and experienced teachers that you might not otherwise find.

The main issue I have with some music schools is how they sell “the dream”, in most cases. The reality of the music industry is really harsh. For instance, a school might tell students that when they graduate, they can open a studio or work for one. While after graduating you might have some skills and experience that you didn’t have before, nothing guarantees that people will come to you to have their music mixed. That said, getting your first client(s) will eventually bring in other clients and opportunities.

“What’s the best way to get a full time job in the music industry or to become an engineer?” I’m often asked, and I’m very careful about how I answer this question. I described my thoughts on finding full-time work in the music industry in a previous post, but I’ll share some points about this topic again here and how it relates to music schools:

  • Whatever anyone tells you or teaches you, even if you apply what they say to the finest level of detail, it’s likely that things still won’t work out the way you envision them. I know this sounds pessimistic, but the reality is that in the music/audio world, no path provides the same results for any two people.
  • The industry is constantly changing and schools don’t always keep up. If you want to make things work, you need to be able to teach yourself new skills, and fast—being self-sufficient is critical to “making it” out there.
  • Doing things and learning alone is as difficult as going to school, but it will be less expensive. What a school provides is a foundation of knowledge that is—without question—valuable. For instance, the physics of sound won’t change in the future (unless one day we have some revolutionary finding that contradicts the current model, which isn’t coming anytime soon).
  • Clients don’t always care where you’re from or what your background is, as long as they get results they like. Your reputation and portfolio might speak more for themselves than saying you went to “School of X”. Where schools or your background can make a difference, though, is if you apply to specific industries, such as video game companies—if you already have some experience with the software they use, companies will see that as a bonus. But I know sound designers at some of those companies who’ve told me that your portfolio of work matters more. For instance, one friend told me that they really like it when a candidate takes a video and completely re-makes the audio and sound design for it; this is more important than understanding specific software, which can always be learned at a later time.
  • The most important thing is to make music daily and to record ideas on a regular basis. Finishing quality songs (see my previous post about getting signed to labels) and having them exposed through releases with labels, by posting them on YouTube channels, self-releasing on Bandcamp, or filling up your profile on SoundCloud can all be critical to reaching potential clients. One of the main reasons I am able to work as an audio engineer and have my own clients is mostly the reputation as a musician I built a while ago. I often get emails from people who say they love my music, and that was one of the main reasons they wanted their music to be worked on by me specifically. Not many schools really teach the process of developing aesthetics (i.e. “your sound”) or the releasing process. While some do, both of those topics also change quickly, and you need to adapt. I’ve been feeling like every six months something changes significantly, but knowing the basics of how to release music certainly helps.

Would I tell someone not to attend a music school?

Certainly not. Some people do well in a school environment, and similarly, some people don’t do well at all on their own. So knowing where you fit most is certainly valuable in your own decision-making about schools. Perhaps a bit of both worlds would be beneficial.

Will a school get you a job in the audio world?

Absolutely not—this is a myth that I feel we need to address. It’s not okay to tell this to students or to market schools this way; it would be as absurd as saying that everyone who graduates from acting schools will find roles in movies and make a living from acting.

What are the alternatives to music schools?

If you don’t think music school is for you—because you don’t have the budget for it, you’re concerned about the job market afterwards, or you’re simply not someone who does well in a classroom—there are still other options for you:

  • Take online classes. This is a no-brainer because there’s a huge number of online classes, courses, and schools, and you can even look into an international school. You can also work on classes at a time that fits your schedule, which means you can invest some of your time off from work into it. Slate Digital has some nice online classes, as does ADSR.
  • Become a YouTube fiend. YouTube has a lot of great content if you’re good at finding what you need. You can create a personal playlist of videos that address either a technique or a topic that is useful. There are also videos where you see people actually working, and they’re usually insightful.
  • Get a mentor. People like myself or others in the industry are usually happy to take students under their wing. While you can find most information online, one advantage of having a mentor is to speed up the search for precise information. How can you learn a precise technique for a problem if you don’t even know what it is? Well, someone with experience can teach you the vocabulary, teach you how to spot a specific sound, and teach you how to find information about it. “How do they make that sound?“, I sometimes hear, as some stuff feels magical to students until I explain that it’s a specific plugin. In my coaching group, we even have a pinned topic where we talk about certain sounds and how they’re made.

I hope this helps you make your own judgments about music schools!

SEE ALSO : On Going DAWless

Taking breaks from music-making

It’s strange how some topics seem to pop up in the music world again and again, both online and in person—taking breaks from music being one of them. During the summer in Canada, most people—including musicians—don’t want to stay indoors as much. Many musicians seem to get FOMO this time of year because they’re not making music. Other people I know are hit by writer’s block (including myself), and some people have asked me whether I think music-making should be a daily routine. While I love this topic, there are multiple ways to approach music production routines and taking breaks from music; I’m sharing some of my own views here, based on my experience.

Taking breaks as you work

This usually surprises a lot of people, but when I work on production or mixing, I take a lot of breaks. I often notice that even after just 10 minutes of working hard, you can lose track of the tone of your song. You get used to what “works”, but the low end or the highs might be too much and you can’t tell because you’ve lost perspective. Even volume can be difficult to assess when your ears are fatigued; you might be playing too loud and not realize it.

Taking a 10-second or so break every 10-15 minutes can prevent fatigue and will help restore your understanding of your song.

If you’re in a creative mood and want to do more, I would strongly recommend taking a break after an hour to test the true potential of your music. If you’re familiar with this blog, you probably aren’t surprised to read that I recommend actually stopping work on a particular song after an hour and working on another one instead, or even doing something completely different.

Taking breaks and making new songs

Sometimes you’ve made a bunch of songs and you feel like you’re repeating yourself, or worse, everything feels annoying (red flag: writer’s block ahead). Some people feel they need to take a break and not open their DAW at all for a while. Is that a good idea?

Yes and no.

My studio is in a building in Montreal that also houses other studios, with all kinds of musicians. The ones who impress me the most are the jazz and classical musicians. They have a very, very intense practice schedule. In talking with them, they say that skipping just one day of practice has an impact on how they master their instrument(s). I can relate; when I take time off over a three-day weekend, on the Monday I am a bit slower to figure out which tool works best for a specific situation. If I work on music, it takes me a bit more time to problem-solve. In a way, I have to agree with the jazz and classical musicians here, even though our music worlds are quite different.

The difference between me—as an audio engineer and electronic musician—and classical and jazz musicians is that I’m constantly working in a space in which I need to invent new ideas, as opposed to practicing something over and over to master it. For my live sets and productions, I do rehearse and play my music—my workflow isn’t just mouse-clicking around a screen. I intervene humanly by using MIDI controllers, mixing by hand, and when working on sound design I’ll also play with knobs to create new ideas. I see creativity as a muscle that needs to stay fit to be powerful, but if you go to the gym regularly, you know muscles also need rest in order to grow.

My conclusion on taking breaks from music is this: I think it’s important to work on audio-related tasks daily in order to stay focused, but when it comes to creating new ideas, creativity is not something that can be forced—it needs to come by itself, naturally. Whenever I push myself too hard to force an idea to come to life, it sounds wrong. The best ideas are spontaneous, often invented quickly, and done without much shaping.

So what does this mean for the musician?

Consider taking long breaks if you have really negative feelings towards what you do, or if you don’t feel good about making music. When taking time off from pursuing your own music creatively, what are some alternatives you can turn to when you need downtime from your own songs?

  • Sound design. Try to see if you can spend time creating one sound you like from scratch, e.g. a pad.
  • Learn production techniques. You can register for online classes to learn something new; ADSR is full of examples at low prices.
  • Explore presets. Each effect or instrument you have comes with presets, and you now have time to explore them all. Knowing how your presets sound helps you quickly access a specific aesthetic when needed.
  • Create templates. Have you considered creating a template for Ableton? I have multiple templates for sound design, mixing, and jamming, as well as song structure templates to play with.
  • Build macros. Use multiple effects and assign them to a few knobs to see how you can alter sounds quickly.
  • Go sample hunting. So many sites exist for finding samples, but finding time to shop is rare. You can do that now.
  • Build new references. If you don’t have a folder of reference tracks, it’s time to start one; if you do, add new ones. A good way is to make reference playlists on SoundCloud or YouTube.
  • Try demos and sample them. I love getting a bunch of VST/AU demos to try out and then sampling them. Eventually I get to know which new virtual synth or effect I really like.
  • Re-open pending projects or recycle them. You might have unfinished songs, and sometimes they are a good place to scavenge for samples or ideas to use in other songs.
  • Revisit past projects you’ve worked on and liked to remind yourself of methods that worked. Whenever I feel I need a break but still want to spend some time on music, I go through past projects to see how I worked and what could have been done better—I always learn something from revisiting old work.

All that said, most importantly, when you take a break from music, do not sell any gear or buy anything new. Just wait. If you like music and making it, chances are high that you’ll be doing it for years to come. Sometimes we need a break, but breaks don’t mean you have to give up completely. The feeling of needing a break is temporary—even if it’s a long break—but your love of music is permanently with you.

SEE ALSO : Are Music Schools Worth The Investment?

Using MIDI controllers in the studio

People often say that MIDI controllers are mostly for performing live, but they can also be your studio’s most useful tool. My advice to people who want to invest in gear—especially those who aren’t happy working only on a computer and dream of having tons of synths (modular and such)—is to start by investing in a controller first.

There are multiple ways to use MIDI controllers; let me share some of my favourite techniques with you and give you advice to easily replicate them.

Controllers for performing in studio

One trend I’ve been seeing in the last few months is producers sharing how they perform their songs in-studio as a way to demonstrate all the possibilities found within a single loop. This is not new—many people like to take moments from live recordings and edit them into a song, but it’s becoming clear that after years and years of music edited to have every single damn detail fixed, artists are realizing that this clinical approach to producing makes a track cold, soulless, and robotic rather than organic-sounding. If you’re still touching up details at version 76 of your song, you’ve probably heard it about 200 times—no one will ever listen to your track that many times. My advice is to leave some mistakes in the track and let it have a raw side. Moodymann’s music, for example, is praised and in demand because his super-raw approach makes electronic music feel very organic and real. Performing your music in-studio to create this type of feeling is pretty simple; it’s super fun and it inspires new ideas too.

For in-studio jams, I recommend the Novation Launch Control XL, which has a combination of knobs and sliders, plus it’s a control surface: depending on where you are on the screen, it can adapt itself. For instance, with the “devices” button pressed, you can control the effects on a specific channel and switch the knobs to control the on-screen parameters.

When I make a new song using a MIDI controller, I’ll start with a good loop. Then I’ll use my controller to quickly play with the different mixes I can create from that loop. Sometimes, for example, I want to try the main idea at different volumes (75%/50%/25%) or at different filter levels. Some sounds feel completely different and sound better when you filter them at 75%. Generally, I put these effects on each of my loops: a 3-band EQ, a filter, a delay, a utility (gain), and an LFO.

Next, I’ll record myself playing with the loop for a good 20 minutes so that I have very long stems of each loop. Then when it comes to arranging, I’ll pick out the best parts.

TIP: I sometimes like to freeze and flatten stem tracks to commit their effects, leaving raw material I can’t go back and tweak endlessly.

Controllers for sound design

I find that the fun part of sound design involving human gestures comes from creating movements an LFO can’t really replicate. It’s one thing to assign a parameter to an LFO for movement, but there’s nothing quite like doing it manually—and the best part is combining automated and human-created movements.

I use a programmed LFO for super-fast modulation that I can’t do physically with my fingers, and then adjust it to the song’s rhythm or melody—usually just mild adjustments. For instance, you could apply super-fast modulation to a resonance parameter with an LFO or with Live 10.1’s automation curves, then use your controller to control the frequency parameter and give it a more organic feel.
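A minimal numeric sketch of that split, in Python: the control rate, LFO settings, and breakpoint values are all made up for illustration, and the “hand gesture” is simulated as a slow interpolated curve (in practice it would be recorded from a MIDI knob or Live automation).

```python
import math

SR = 1000  # control rate (Hz); control signals don't need audio rate

def fast_lfo(rate_hz, seconds, depth=0.2, center=0.7):
    """A programmed LFO: modulation far too fast to perform by hand."""
    n = int(SR * seconds)
    return [center + depth * math.sin(2 * math.pi * rate_hz * i / SR)
            for i in range(n)]

def hand_gesture(points, seconds):
    """A slow 'performed' curve, e.g. recorded from a MIDI knob.

    `points` are (time_fraction, value) breakpoints; values in between
    are linearly interpolated, like a recorded automation lane.
    """
    n = int(SR * seconds)
    out = []
    for i in range(n):
        t = i / n
        for (t0, v0), (t1, v1) in zip(points, points[1:]):
            if t0 <= t <= t1:
                out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
                break
    return out

resonance = fast_lfo(30, 2.0)                                 # 30 Hz wobble
cutoff = hand_gesture([(0, 0.2), (0.5, 0.9), (1, 0.4)], 2.0)  # slow sweep
```

The two curves target different parameters of the same voice, which is the “combine automated and human movement” idea in miniature.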

Recently, I’ve been really enjoying a complementary modular ensemble for Live called Signal by Isotonik; it allows you to build your own signal flow to go a bit beyond the usual modules that you’ll get in Max for Live. Where I find Signal to be a huge win is when it’s paired with PUSH, which is by far the best controller you can get for sound design. PUSH gives you quick access to the different parameters of your tools, and if you make macros it becomes even more organized.

Controllers for arrangements

Using MIDI controllers in arrangements is, to me, where the most fun can come from; using them can completely change the idea of a song.

For instance, if your song has a 3-note motif that has the same velocity across the board, I love to modulate the volume of the 3 notes into different levels. When we speak, all the words we use in a sentence have different levels and tones. For example, if you say to someone “don’t touch that!”, depending on the intonation of any particular word, it can change the emphasis of what you’re saying. “DON’T touch that!” would be very different from “don’t touch THAT!” This same philosophy can apply to a 3-note melody; each note is a word and you can decide on which ones to emphasize and how a certain emphasis fits in your song’s main phrase or motif.

If you assign a knob or fader on your controller to the volume of the melody, you can also control the amplitude of each note. You can do this for the entire song, or you can copy the best takes and apply their movement to the entire song. I find that there’s a slight difference in modulation depending on whether you use a knob or a fader; each seems to have a different curve—when I play with each, they turn out differently (but perhaps that’s just me). Explore and see for yourself!
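The emphasis idea can be sketched with plain note/velocity pairs (the pitches and velocity values here are hypothetical; in practice you would ride a fader or edit MIDI velocities):

```python
# Each note of the motif: (pitch, velocity). Flat velocities sound robotic.
motif = [(60, 100), (64, 100), (67, 100)]

def emphasize(notes, accent_index, accent=127, soft=70):
    """Give one note of the motif the accent ("DON'T touch that!") and
    pull the others back, like riding a fader while the loop plays."""
    return [(p, accent if i == accent_index else soft)
            for i, (p, _) in enumerate(notes)]

first_word = emphasize(motif, 0)   # "DON'T touch that!"
last_word  = emphasize(motif, 2)   # "don't touch THAT!"
```

Same three pitches, two very different phrases; only the emphasis moved.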

TIP: Using motorized faders can be a huge game-changer. Check out the Behringer X-Touch Compact.

Another type of controller people don’t often consider is the foot pedal. If you’re the type who taps your foot while making music, you could take advantage of that twitching by applying it to a specific parameter. Check out the Yamaha FC4A. Use it with PUSH and you have a strong arsenal of options.

SEE ALSO : Equipment Needed to Make Music – Gear vs. Experience vs. Monitoring

Workflow Suggestions for Music Collaborations

One of the most underestimated approaches to electronic music is collaboration. It seems to me that because of electronic music’s DIY ethos, people believe they need to do absolutely everything themselves. However, almost every time I’ve collaborated with others, I hear them say, “wow, I can’t believe I haven’t done that before!” Many of us want to collaborate, but actually organizing an in-person session can be a challenge. In thinking about collaboration, and after some powerful collaboration sessions of my own, I noted which aspects of our workflow helped create a better outcome. I find that there are some do’s and don’ts in collaborating, so I’ve decided to share them with you in this post.

Have a plan

I know this sounds obvious, but the majority of people who collaborate don’t really have a plan and will just sit down and make music. While this works to some degree, you’re really missing out on the extra fun that comes from planning ahead. I’m not talking about big, rigid plans, but simply having an idea of what you want to accomplish in a session. Deciding you’ll jam can be a plan in itself, deciding to work on an existing track could be another, and working on an idea you’ve already discussed could be a more precise plan.

Personally, I like to have roles decided for each person before the session. For example, I might work on sound design while my partner thinks about arrangements. When I work with a musician, I usually already have in mind that this person does something I don’t do, or does it better than I can. The most logical way to work is to have each participant take a role in which they do what they do best.

If you expect to be great at sound design, mixing, beat sequencing, editing, etc., all at once, you’re probably going to end up a “jack of all trades, master of none”. Working with someone else is a way to learn new things and to improve.

A good collaborative session creates a total sense of flow; things unfold naturally and almost effortlessly. With that in mind, having a plan gives the brain a framework that determines the task(s) you need to complete. One of the rules of working in a state of flow is to do something you know you do well, but to create a tiny bit of challenge within it.

Say “yes” to any suggestions

This is a rule that I really insist on, though it might sound odd at first. Even if an idea seems silly, you should say yes to it, because you’ll never know where it will lead unless you try it. I’ve been in sessions where I constantly had the impression I was doing something wrong because we weren’t following the “direction” of the track I had in my head. But what if veering off my mental path leads us to something new and refreshing? What if my partner—based on a suggestion that may have seemed wrong at first—accidentally discovers a sound we had no idea would fit in there?

This is why I find that the “yes” approach is an absolute win.

Saying yes to everything often just flows more naturally than saying no. However, if the “yes” approach doesn’t work easily, don’t force it; it’s much better to put an idea aside and return to it another day if it’s not working.

Trust your intuition; listen to your inner dialogue

When you work with someone else, you have another person who’s also hearing what you’re hearing, and who will interact with the same sounds and try new things. This new perspective disconnects you slightly from your work and gives you a bit of distance. If you pay attention, you’ll notice that your inner dialogue may go something like, “oh, I want a horn over that! Oh, let’s bring in claps!” That inner voice is your intuition, your culture, and your mood throwing out ideas; sharing these ideas with one another can help create new experiments and layers in your work.

Combining this collaborative intuition with a “yes” attitude will greatly speed up the process of completing a track. Two people coming up with ideas for the same project often work faster and better than one.

Take a lot of breaks

It’s easy to get excited when you’re working on music with another person, and when you do, some ideas might feel like they’re the “best new thing”, but these same ideas could actually be pretty bad. You need time away from them to give yourself perspective; take breaks. I recommend pausing every 10 minutes. Even pausing for a minute or two to talk or to stand up and stretch will make a difference in your perceptions of your new ideas.

Centralize your resources

In collaborating, when you reach the point of putting together your arrangements, I would say it’s important to have only one computer as the main control station for your work. Ideally, you’d want an external hard drive that you can share between computers easily; this way you can use everyone’s plugins to work on your sounds. One of the most useful things about teaming up with someone else is that you get access to their resources, skills, materials, and experience. Make sure to get the most out of collaborating by knowing what resources you can all draw upon, and then select a few things you want to focus your attention on. It’s easy to get distracted or to think you need something more, but I can tell you that you can do a lot with whatever tools you have at the moment. Working with someone else can also open your eyes to tools you perhaps didn’t fully understand, were not using properly, or were not using to their full potential.

Online collaboration is different

Working with someone over the internet is a completely different business than working together in person. It means you won’t work at the same time, and some people also work more slowly or more quickly than you. I’ve tried collaborating with many people online and it doesn’t always work. It takes more than just the will of both participants to make it work; it demands cohesion and flexibility. All my previous points about collaborating in person also apply to collaborating online. Assigning roles and having a plan really helps. I also find that sharing projects that aren’t working for me with another person will sometimes give them a new life.

If you’re a follower of this blog, you’ll often read that one of the most important things about production that I stress is letting go of your tracks; this is essential in collaborating. I usually try to shut off the inner voice that tells me my song is the “next hit”, because thinking this way usually never works. No one controls “hits”, and being aware of that is a good start. That said, when you work with someone online—since this person is not in the room with you and might work on the track while you’re busy with something else—I find it works best to be relaxed about the outcome. This means that if I have a bad first impression of what I’m hearing from the person I’m working with, I usually wait a good 24 hours before providing any feedback.

What if you really don’t like what your partner is making?

Not liking your partner’s work is probably the biggest risk in collaborating. If things are turning out this way in your collaboration, perhaps you didn’t use a reference track inside the project, or didn’t set up a proper mood board. A good way to avoid problems in collaboration is to make sure that you and your partner are on the same page mentally and musically before doing anything. If you both use the same reference track, for example, it will greatly help to avoid disasters. If you don’t like a reference track someone has suggested, I recommend proposing one you love until everyone agrees. If you and your partner(s) never agree, don’t push it; maybe work with someone else.

The key to successful collaborations is to keep it simple, work with good vibes only, and to have fun.

SEE ALSO : Synth Basics

Using Quad Chaos

I’m proud to announce the release of our first patch – Quad Chaos. I met Armando, the programmer, in the Max/MSP group on Facebook; his background was exactly what I was looking for, and we got along very well. Quad Chaos is basically a patch version of what this blog is about: finding ways to create innovative sound design through modulation and chaos.

Speaking of chaos, the only “rule” for using Quad Chaos is to resample everything you do, because we intentionally wanted it to be something that works ephemerally; something you can’t really control and just have to go with. There are many tools out there you can use to do anything you want, but we wanted to create something experimental that can be fun and creative at the same time.

Make sure these knobs are up!

The first thing that appears when you load up Quad Chaos is a screen in which you can add up to four samples. If you hear nothing when you load in a sound, you probably need to raise the volume, direction, or panning. In the demo video, Armando has used short samples, but I find that the magic truly comes together when you load up longer files such as field recordings, things that are four bars long, or even melodic content. I don’t really find that Quad Chaos works well if you load a sample that has multiple instruments in it, but I still need to explore it more and I could be wrong about that. My advice is to start with one sample that you load into Quad Chaos, and then with your mouse, highlight a portion of it. Personally, I like to start with a small selection based on the waveform content I see. I’ll try to grab one note/sound, along with some silence. Once you make a selection, you’ll hear a loop playing that might sound like something in a techno track…but this is just the beginning.

While it’s very tempting to load in all four samples at once, if you do things this way, Quad Chaos will get out of control quickly; I like to start with one layer and then build from there.

Once you isolate a section that loops to your taste, it’s time to engage the modulation. One trick that I like to do with any synths or gear is to move one knob to its maximum and then minimum, quickly then slowly, to simulate what an LFO could do. When I find something I like, then I’ll assign an LFO or envelope to it and start my tests.

For example, in Quad Chaos you can assign the first modulator to a direction; you click on “dir” and you’ll see numbers underneath, which represent the modulation source. To access the modulation section, use the drop-down menu and pick “mod”, and you’ll see the first modulation.


Depending on how you set it up, you’ll start hearing results as your sound now has modulation on and in full effect. I know the lack of sync in the plugin might seem odd, but to repeat myself, a lack of sync is needed to create “chaos” and this approach gives more of an analog feel to what you make; you can get some pretty polyrhythmic sequences because of this as well.

As I mentioned earlier, I usually start my sound with an LFO set to a sine curve, and then I explore slow/fast oscillation to see what kind of results I get. I’ll find a sweet spot somewhere in the middle, then I’ll try all the different oscillations to hear other results. I’m very much into the random signal just because it creates the impression of constantly “moving” sonic results. Afterwards, I have a lot of fun scrolling through the recorded results of these experiments, and from them I pick one-bar loops/sections. I find that the random signal is always the one that gives me pretty interesting hooks and textures.

Once you’re happy with the first layer you’ve created with the first loop, you can use the other loops to create complex ideas or simply to add a bit of life to the first one. I’ve seen a few artists using Quad Chaos already, and everyone seems to come up with really different use-cases and results. One thing I often see is people dropping important samples from a production they’re currently working on into the plugin to get some new ideas out of them. My friend Dinu Ivancu – a sound designer who makes movie trailers – tried out Quad Chaos and had some very lovely feedback of his own:

I love it JP!

[Quad Chaos] is a fantastic tool. I would love it even more if it had a few quality live options. Still though, as is, it’s an amazing tool to generate live and organic sounds out of ordinary samples. I’ll send you something I made with it and just two soft-synths. It’s fantastic. That reverb is AMAZING! Congrats – you guys did a great job. I’ll try to help [Quad Chaos] get to a wider audience as it’s very, very good for film work!

Dinu Ivancu

I think what Dinu is excited about here is the creation of small-but-detailed organic, improbable textures that are difficult or laborious to make in a very stern, organized DAW. Breaking down the strict boundaries of your DAW opens doors to creating sounds you’d hear in the real world that are completely off-sync and un-robotic. Quad Chaos also includes a built-in reverb to help create space for your sounds (and there are other effects included as well!).

Jason Corder, “Offthesky”, sent us a neat video of himself working with Quad Chaos. Jason shows us how you can record a song live, only using the plugin. It’s very spontaneous; he’s using the macros to create external automation to keep a minimum structure. This approach is something I didn’t initially think of, but seeing Jason do it makes me think that I’ll explore that avenue next time I use it!

You can get a copy of Quad Chaos here and if you make songs or videos, I’d be more than happy to see how you use it!

SEE ALSO : Creating tension in music

Synthesizer Basics

I’ve realized that using synths is a bit of an esoteric process for many (for me it definitely was for a while), so I’d like to share with you some synth basics. I used to read in-depth things about synths online, but I didn’t feel like it was really helping me do exactly what I wanted. Synths can create certain sounds, but the ability to shape these sounds into something you like is another task. When I dove into the modular rabbit hole, I felt like I needed to really grasp how to use them. After years of working with synths, presets have actually provided me with many answers as to how things are made, and I’ve ended up learning more from presets than from tutorials. It’s probably useful to understand some basic concepts about how to use synths in order to create lush or complex sounds, and to develop your own set of synth sounds. I’m not going to explain every synthesis concept, but I’ll cover some synth basics.

My personal go-to tools when I get to work with synths are Omnisphere, Pigments, and Ableton’s Operator. They all have different strengths and ways to work that I feel fulfill my needs. When people talk synths, they often discuss which ones are “best”, but I find that these three are pretty powerful, not only for the sounds they create, but for how they work. Speaking of workflow, if a synth doesn’t create something I like quickly, I usually get annoyed as I want to spend time making music and not just spend an hour designing a sound. In the case of these three, they all have several oscillators that can quickly be tweaked in a way you want.

Oscillators

Imagine the oscillator as a voice (I’ll explain polyphony another time, as it’s a slightly different topic). The oscillator shapes sound by creating a waveform: sine, square, triangle, saw, etc. Each waveform has certain characteristics, and different waveforms have more or fewer harmonics. If you play a note, you’ll first see that it creates a fundamental frequency (as in, the note played has its own frequency), followed by the harmonics. Sine waves, because of their simplicity, have basically no harmonics, but a saw wave has a lot.

The sine wave is a fundamental frequency and has no harmonics.
A saw wave is different. The red arrow shows the fundamental frequency, and the green, the harmonics.

As you can see, sine and saw waves create different results, and you can combine them to create richer sounds. When there are more harmonics, the human ear tends to perceive the sound as richer, as it covers more frequencies (yes, this is a simple explanation of a more complex topic, but I’ll leave it for another time).

So what should you take away from this? Well, when you see a synth with multiple oscillators, realize that you can combine them in sound design. One basic synth exercise I give to students is to start with one oscillator, like a sine wave, then add a second one pitched a bit higher (one octave) using a triangle wave, and a third oscillator that is a saw, pitched up again. If you play the same note, you’ll see the content is altered because the harmonics now interact to create new “sonic DNA”.
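The exercise above can be sketched in a few lines of Python. This is a minimal illustration only (assuming numpy, a 44.1 kHz sample rate, and naive, non-band-limited waveforms, so it's for intuition rather than clean synthesis):

```python
import numpy as np

SR = 44100  # sample rate in Hz (an assumption for this sketch)

def osc(freq, kind, dur=1.0, sr=SR):
    """Naive oscillator: returns a 'sine', 'triangle', or 'saw' waveform."""
    t = np.arange(int(sr * dur)) / sr
    phase = (freq * t) % 1.0          # 0..1 phase ramp
    if kind == "sine":
        return np.sin(2 * np.pi * phase)
    if kind == "triangle":
        return 4 * np.abs(phase - 0.5) - 1
    if kind == "saw":
        return 2 * phase - 1
    raise ValueError(kind)

# The exercise: a sine at the root, a triangle one octave up,
# and a saw another octave up, mixed at equal gain.
root = 110.0  # A2
layered = (osc(root, "sine")
           + osc(root * 2, "triangle")
           + osc(root * 4, "saw")) / 3
```

Viewing `layered` on a spectrum analyzer would show the fundamental plus the harmonic series contributed by the triangle and saw, which is the interaction of "sonic DNA" described above.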

This simple starting point should pique your interest in exploring combinations of different oscillator settings to shape different sounds. Like I explained in a past article, sounds are combinations of layers that create different outcomes; the same goes for synths and oscillators. Synths are a rougher approach, and it takes time at first to feel like you’re getting somewhere, but the more you practice, the better you get, and then you can even use a synth to bring richness to samples you layer. For example, I frequently use a low sub sine to give bottom to a wimpy kick.

Envelopes

After deciding on the oscillator content of your synth, next comes shaping it. This is done with an ADSR envelope (Attack, Decay, Sustain, Release). The envelope tells your synth how to react to the MIDI notes you’re sending it. It waits for a note, and then, depending on how the envelope is set, it plays the sound in a way that shapes both the amplitude (volume) and timing. For example, a fast attack means the sound will start playing as soon as the key is pressed, and a long release will let the sound continue playing for a little while after you release it. Each oscillator can have its own envelope, but you could have one general envelope as well. The use of envelopes is one of the best ways to give the impression of movement to a sound. I’m addicted to using the Max envelope patch and will assign it to a bunch of things on all my sounds, but I had to first learn how it worked by playing with it on a synth. While the envelope modulates the amplitude, it can also be used to shape other characteristics, such as the pitch.
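Here is a sketch of how an ADSR stage turns a held note into an amplitude shape, assuming numpy and a 44.1 kHz sample rate (a simplified linear envelope, not any particular synth's implementation):

```python
import numpy as np

def adsr(attack, decay, sustain, release, hold, sr=44100):
    """Linear ADSR amplitude envelope.
    attack/decay/release are times in seconds, sustain is a 0..1 level,
    and hold is how long the key stays down after the decay finishes."""
    a = np.linspace(0.0, 1.0, int(sr * attack), endpoint=False)     # rise
    d = np.linspace(1.0, sustain, int(sr * decay), endpoint=False)  # fall to sustain
    s = np.full(int(sr * hold), sustain)                            # held level
    r = np.linspace(sustain, 0.0, int(sr * release))                # fade out
    return np.concatenate([a, d, s, r])

# Fast attack, short decay, medium sustain, slow-ish release.
env = adsr(attack=0.01, decay=0.1, sustain=0.6, release=0.3, hold=0.5)
# Multiply any oscillator output by `env` to shape its amplitude.
```

The same curve could just as well be routed to pitch or filter cutoff, which is the "shape other characteristics" idea mentioned above.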

Filters

You might already be familiar with filters, as they’re built into DJ mixers; filters allow you to “remove” frequencies. In the case of a synth, what’s useful is that most synths have filters that can be assigned per oscillator, or used as a general way to “mold” all the oscillators together. If you take a low-pass filter, for example, and lower the frequency, you’ll see that you smooth out the upper harmonics. In the case of pads, it’s pretty common that multiple oscillators are used to make a very rich sound, but the filter is the key, as you’ll want to dull the result, making your pad less bright and defined.

LFOs

LFOs are modulators and, as you know, one of my favorite tools. I use them on many things to add life and to give the impression of endless, non-repetitive changes. I’ll even sync them to a project and use them to accentuate or fix something. In most synths you can use LFOs to modulate one or multiple parameters, just like envelopes. What’s fun is to use a modulator to modulate another modulator; for example, I sometimes use LFOs to change the envelope, which helps give sounds different lengths. Using LFOs on filters is also a good way to vary the presence of your harmonics, creating different textures.
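To make the "LFO on a filter" idea concrete, here's a toy sketch in numpy: a one-pole low-pass whose cutoff is swept by a sine LFO. The filter is deliberately crude (an assumption for illustration, not a musical-grade design):

```python
import numpy as np

SR = 44100  # assumed sample rate

def sine_lfo(rate_hz, n, sr=SR):
    """Unipolar (0..1) sine LFO at `rate_hz` cycles per second."""
    t = np.arange(n) / sr
    return 0.5 + 0.5 * np.sin(2 * np.pi * rate_hz * t)

def swept_lowpass(signal, cutoff_lo, cutoff_hi, rate_hz, sr=SR):
    """One-pole low-pass whose cutoff sweeps between cutoff_lo and
    cutoff_hi (Hz), driven by a sine LFO."""
    cutoff = cutoff_lo + (cutoff_hi - cutoff_lo) * sine_lfo(rate_hz, len(signal))
    alpha = 1.0 - np.exp(-2 * np.pi * cutoff / sr)  # per-sample smoothing coeff
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha[i] * (x - y)   # classic one-pole: y chases x at rate alpha
        out[i] = y
    return out

# White noise swept between 200 Hz and 4 kHz, twice per second.
noise = np.random.default_rng(0).uniform(-1, 1, SR)
swept = swept_lowpass(noise, 200.0, 4000.0, rate_hz=2.0)
```

Listening to `swept` gives the familiar "breathing" texture: the harmonics fade in and out as the cutoff moves, which is exactly the varying-presence effect described above.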

Noise

One of the most misunderstood points in synthesis is the use of noise. Noise is a good way to emulate an analog signal and to add warmth. One of the waveform types an oscillator can produce is noise: white noise or another color. You can add it in the high end or have it modulated by an envelope so it tracks your keys. I like to keep noise waves very low in volume, and sometimes filter them a bit, but that said, I use a noise oscillator in every patch I design. Even a little bit of noise as a background layer can create a sense of fullness. If you record yourself with a microphone in an empty, quiet place, you’ll notice there’s always a bit of background noise. The human ear is used to noise and will be on the lookout for it. Hearing noise in a song or sound creates a certain sense of warmth.
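The "quiet noise bed for fullness" trick is easy to demonstrate. A minimal numpy sketch; the -40 dB level is just an illustrative choice for "very low in volume":

```python
import numpy as np

def add_noise_floor(signal, level_db=-40.0, seed=1):
    """Layer a very quiet white-noise bed under a signal.
    level_db is the noise gain relative to full scale."""
    gain = 10 ** (level_db / 20)  # -40 dB -> 0.01 linear
    noise = np.random.default_rng(seed).uniform(-1, 1, len(signal))
    return signal + gain * noise

# A plain sine with a subtle noise floor underneath.
tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
warm = add_noise_floor(tone)
```

In practice you would filter the noise and ride its level, but even this flat bed hints at the sense of "room" the ear expects from real recordings.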

Why do I love Omnisphere and Pigments?

Both Omnisphere and Pigments are very powerful for different reasons. Omnisphere is one of the most used software tools in the sound design industry, as well as by composers who write film scores. Hans Zimmer is known to use it, among others. It has more oscillators than Operator, not just in quantity, but also in emulations of existing synths. For example, you could have your lower oscillator emulate a Juno, then add a Moog for the middle one and end with an SH-101. Even in real life, you can’t possibly do that unless you own all three of those synths, and even then it would be a bit of a mess to organize them all together. Plus, Omnisphere’s emulations sound true to the originals. If this isn’t convincing enough, Omnisphere also comes with a library of samples that you can use to layer on top of the oscillators, or you can import your own. Add one of the best granular synthesis engines and you are set for endless possibilities.

Pigments by Arturia

Pigments is made by Arturia, and it was made with a very lovely graphical approach, where you have your modulators in the lower part of the UI and the sound frequencies in the upper part. You can then easily and quickly decide to add modulation to one parameter, then visually see it move. It’s one of those rare synths that has modulation at its core. This is why I love it; it provides me with numerous quick sounds resulting from deep or shallow exploration.

SEE ALSO : Using MIDI controllers in the studio

More tips about working with samples in Ableton

Recently I was doing some mixing and I came across multiple projects in a row that had some major issues with regards to working with samples in Ableton. One of them is a personal issue: taking a loop from a sample bank and using it as is, but there’s no real rule about doing this; if you bought the samples you are entitled to use them in any way you want.

While I do use samples in my work sometimes, I do it with the perspective that they are a starting point, or to be able to quickly pinpoint the mood of the track that I’m aiming for. There’s nothing more vibe-killing than starting to work on a new song but losing 30 minutes trying to find a fitting sound, like hi-hats for instance. One of my personal rules is to spend less than 30 minutes tweaking my first round of song production. This means that the initial phase is really about focusing in on the main idea of the song. The rest is accessory and could be anything. If you mute any parts except the main idea(s), the song will still be what it is.

So why is it important to shape the samples?

Well basically, the real answer is about tying it all together to give personality to the project you’re working on. You want it to work as a whole, which means you might want to start by tuning the sample to the idea.

Before I go on, let me give you a couple of suggestions regarding how to edit the samples in ways to make them unique.

I always find that pitch and length are the quickest ways to alter something and easily trick the brain into thinking the sounds are completely new. Even pitching down by 1 or 2 steps or shortening a sample to half its original size will already give you something different. Another trick is to change where the sample starts. For instance, with kicks, I sometimes like to start playing the sample later in the sound to have access to a different attack or custom make my own using the sampler.

TIP: I love to have sounds change length as the song progresses, either by using an LFO or by manually tweaking them. For example, snares that get longer create tension in a breakdown.
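As a rough illustration of the pitch, length, and start-point tricks above, here's a numpy sketch of classic sampler-style repitching (resampling with linear interpolation, so pitching down also lengthens the sound) and of starting a sample later to expose a different attack. The "kick" is just a stand-in sine burst:

```python
import numpy as np

def repitch(sample, semitones):
    """Re-pitch by resampling, like a classic sampler: pitching down
    slows playback (longer sound), pitching up shortens it."""
    ratio = 2 ** (semitones / 12)               # playback speed factor
    idx = np.arange(0, len(sample) - 1, ratio)  # fractional read positions
    lo = np.floor(idx).astype(int)
    frac = idx - lo
    return (1 - frac) * sample[lo] + frac * sample[lo + 1]  # linear interp

def late_start(sample, fraction):
    """Skip the first part of a sample to expose a different attack."""
    return sample[int(len(sample) * fraction):]

kick = np.sin(2 * np.pi * 60 * np.arange(4410) / 44100)  # 0.1 s stand-in
darker = repitch(kick, -2)       # two semitones down, slightly longer
clicky = late_start(kick, 0.25)  # start playback 25% into the sound
```

Even these two tiny operations already "trick the brain" in the way described: the repitched copy has a different fundamental and duration, and the late-start copy has a completely different transient.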

In a past post, I covered the use of samples more in-depth, and I thought I could provide a bit more in detail about how you can spice things up with samples, but this time, using effects or Ableton’s internal tools.

Reverb: Reverb is a classic; simply dropping it on a sound will alter it, but the downside is that it muffles the transients, which can make things muddy. Solution: use a Send/AUX channel where you use a transient designer to (drastically) remove the attack of the incoming signal and then add a reverb. In doing this, you’ll only be adding reverb to the decay of the sound while the transient stays untouched.

Freeze-verb: One option you’ll find in Ableton’s reverb is the freeze function. Passing a sound through it and freezing it is like taking a snapshot of the sound and putting it on hold. Resample that. I like to pitch it up or down and layer it with the original sound, which allows you to add richness and harmonics to the original.

Gate: So few people use Ableton’s Gate! It’s one of my favorites. The best way to use it is by side-chaining it with a signal. Think of this as the opposite of side-chaining a compressor: the gate will let the gated sound play only when the other is also playing, and you also have an envelope on it that lets you shape the sound. This is practical for many uses, such as layering percussive loops, where the side-chained loop plays only when it detects sound, which makes a mix way clearer. In sound design, this is pretty fun for creating multiple layers from a dull sound by using various incoming signals.
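Here's a minimal sketch of the side-chained gate behaviour described above: the gated sound passes only while a key signal is above a threshold, with a short release so the gate doesn't chatter. This is an illustration in numpy, not Ableton's actual Gate algorithm:

```python
import numpy as np

def sidechain_gate(signal, key, threshold=0.1, release_ms=20.0, sr=44100):
    """Let `signal` through only while `key` exceeds `threshold`.
    The gate stays open for `release_ms` after the key drops."""
    hold = int(sr * release_ms / 1000)
    above = np.abs(key) > threshold
    gate = np.zeros(len(signal))
    count = 0
    for i in range(len(signal)):
        if above[i]:
            count = hold          # re-arm the release timer
        if count > 0:
            gate[i] = 1.0
            count -= 1
    return signal * gate

# A sustained pad, gated by a short 0.1 s trigger burst.
pad = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)
key = np.zeros(44100)
key[:4410] = 1.0
gated = sidechain_gate(pad, key)
```

Swap the burst for a percussion loop and the pad for any dull sustained layer and you get the loop-layering trick from the text: the pad only speaks where the percussion hits.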

Granular Synthesis: This is by far my favorite tool to rearrange and morph sounds. It will stretch sounds, which gives them this grainy texture and something slightly scattered sounding too. Melda Production has a great granular synth that is multi-band, which provides lots of room to treat the layers of a sound in many ways. If you find it fun, Melda also has two other plugins that are great for messing up sound with mTransformer and mMorph.

Grain Delay, looped: A classic and sometimes overused effect, this one is great as you can automate pitch over delay. It is still a great tool to use along with the Looper; they do really nice things when combined. I like to make really short loops of sounds going through the Grain Delay. This is also fun if you take the sound and double its length, as it will be stretched out, granular style, creating interesting texture along the way.

Resampling: This is the basis of all sound design in Ableton; resampling yourself tweaking a sound is by far the most organic way to treat sound. If you have Push, it’s even more fun, as you can create a macro, assign certain parameters to the knobs, and then record yourself just playing with them. You can then chop the session down to the parts you prefer.

I hope this was useful!

SEE ALSO : Learning how to make melodies

Creating organic sounding music with mixing

I’m always a bit reluctant to discuss mixing on this blog. The biggest mistake people make in mixing is to apply all the advice they can find online to their own work. This approach might not work, mostly because there are so many factors that can change how you approach your mix that it can be counter-productive. The best way to write about mixing would be to explain something and then include the many cascades of “but if…”, with regards to how you’d like to sound. So, to frame things properly, I’ll cover one topic I love in music: how to get a very organic-sounding mix.

There are many ways to approach electronic music. There’s the very mechanical way of layering loops, which is popular in techno, or using modular synths/Eurorack. These styles, like many others, have a couple of main goals in mind: making people dance or showcasing craftsmanship in presenting sounds. One of the first things you want to do before you start mixing is to know exactly what style you want to create.

Wherever you’re at and whatever the genre you’re working in, you can always infuse your mix with a more organic feel. Everyone has their own way, but sometimes it’s about finding your style.

In my case, I’ve always been interested in two things, which are reasons why people work with me for mixing:

  1. While I use electronic sounds, I want to keep them feeling as organic and real as possible. You’ll have the impression of being immersed in a space of living, unreal things; the clash between the synthetic and the real is, for me, one of the most interesting things to listen to.
  2. I like to design spaces that could exist. The idea of putting sounds in place brings the listener into a bubble-like experience, which is the exact opposite of commercial music where a wall of sound is the desired aesthetic.

There’s nothing wrong with commercial music, it just has a different goal than I do in mixing.

What are some descriptions we can apply to an organic, warm, rounded sound?

  • A “real” sounding feel.
  • Distance between sounds to create the impression of space.
  • Clear low end, very rounded.
  • Controlled transients that aren’t aggressive.
  • Resonances that aren’t piercing.
  • Wideness without losing your center.
  • Usually a “darker” mix with some presence of air in the highs.
  • Keeping a flatter tone but with thick mids.

Now, with this list in mind, here are some approaches for dealing with your mix and production.

Select quality samples to start with. It’s very common for me to come back to a client and say “I have to change your kick, clap and snare”, mostly because the source material has issues. This is because many people download crap sounds via torrents or free sites, which usually haven’t been handled properly. See sounds and samples as the ingredients you cook with: you want to compose with the best-sounding material. I’m not a fan of mastered samples, as I’ve noticed they sometimes distort if we compress them, so I usually want something with headroom. TIP: Get sounds at 24-bit minimum; invest some bucks to get something that is thick and clear sounding.

Remove resonances as you go. Don’t wait for a mixdown to fix everything. I usually make my loops and will correct a resonance right away if I hear one. I’ll freeze and flatten right away, sometimes even save the sample for future use. To fix a resonance, use a high-quality EQ with a Q of about 5 maximum and set your EQ so you can hear what you are cutting. Then cut by about 4-5 dB to start with. TIP: Use Fabfilter Pro-Q3, buy it here.

Control transients with a transient designer instead of an EQ. I find that many people aren’t sensitive to how annoying percussion can be in a mix if the transients are too aggressive. That can sometimes only be noticed once you compress. I like to use a transient designer to lower the impact; just a little, on the ones that are annoying. TIP: Try the TS-1 Transient Shaper, buy it here.

Remove all frequencies under the fundamental of the bass. This means removing the rogue resonances and monitoring what you’re cutting. If your bass or kick hits at 31 Hz, then remove anything under that frequency. EQ the kick and all other low-end sounds independently.

Support the low end with a sub sine to add roundness. An anemic or confused low end can be swapped out or supported by a sine wave synth that enhances the fundamental frequency and makes it rounder. It makes a big difference to the warmth of the sound. Ableton’s Operator will do, or basically any synth with oscillators you can design.

High-pass your busses with a filter at 12db/octave. Make sure you use a good EQ that lets you pick the slope and high-pass not so aggressively to have a more analog feel to your mix.

Thicken the mids with a multiband compressor. I like to compress the mids between 200 and 800 Hz. Clients often get it wrong around there, and this range is where the real body of your song lies. The presence it provides on a sound system is dramatic if you control it properly.

Use clear reverb with short decay. Quality reverbs are always a game changer. I like to use different busses at 10% wet and with a very fast decay. Can’t hear it? You’re doing it right. TIP: Use TSAR-1 reverb for the win.

Add air with a high-quality EQ. Please note this is a difficult thing to do properly, and a high-end EQ will give better results. Just add a gentle boost on your melodic buss around 15 kHz. It adds a very subtle sheen to the mix and is pleasing to the ear in small quantities. TIP: Turbo EQ by Melda is a hot air balloon.

Double Compress all your melodic sounds. This can be done with 2 compressors in parallel. The first one will be set to 50% wet and the second at 75%. The settings have to be played with but this will thicken and warm up everything.

Now, for space, I make 3 groups: sounds that are subtle (background), sounds that are in the middle part of the space, and sounds that are upfront. A mistake many people make is to have too many sounds upfront and no subtle background sounds. A good guideline is 20% upfront as the stars of your song, 65% in the middle, and the remaining 15% as the subtle background details. If your balance is right, your song will automatically breathe and feel right.

All the upfront sounds are the ones where the volume is at 100% (not at 0 dB!), the ones in the middle are generally at 75%, and the others vary between 50% and 30% volume. When you mix, always play with the volume of your sound to see where it sits best in the mix. Bring it too low, too loud, then somewhere in the middle. You’ll find a spot where it feels like it is alive.

Lastly, one important thing is to understand that sounds have relationships to one another. This is sometimes “call and response”, or some are cousins… they are interacting and talking to each other. The more you support a dialog between your sounds, the more fun it is to listen to. Plus it makes things feel more organic!

SEE ALSO : More tips about working with samples in Ableton

Tips to add movement and life to your songs

One of the most popular topics in music production is with regards to making music feel “alive” by creating movement in music. While I already covered this topic in a past article, I’ll focus today on tools you can use and some techniques you can also apply to create movement.

First, let’s classify movement into categories:

  • Modulation (slow, fast)
  • Automation (micro, macro)
  • Chaos
  • Saturation

One of the things that makes modular synths very popular is the possibility of controlling and modulating many parameters the way you want, but the other aspect that makes them exciting is the analog side. You’ve probably seen and heard multiple debates about analog vs. digital, and what’s funny is that many people feel they know what this is about, but can’t really articulate it.

Take, for example, something we all know well: a clock that shows time.

An analog clock is one with hands that are moved by an internal mechanism, making them move smoothly in harmony as time goes by. There’s a real preciseness to it, where you can see the tiny moments between seconds.

A digital clock jumps from second to second, minute to minute, with the numbers increasing: there is no smooth, slowly incrementing hand that moves between numbers; they just jump.

Sound is pretty much the same in a way. Once it’s digitized, the computer represents the information using sample and bit rates for precision. The flow isn’t the same, but you need a really precise system and ear to spot the difference; some people can, but it’s very rare. This is why, in theory, there’s a difference between digital files and vinyl records.

One eye-opener for me came when I was shopping for modular gear at the local store and talking with the store’s specialist, who was passionate about sound. “The one thing I don’t like about samples is that the sound is frozen and dead”, he said. With modular synths, because there’s often an analog component, the sound, on a microscopic level, is never the same twice.

This is why using samples and playing with digital tools in your DAW needs a bit of magic to bring it all to life.

Modulation

By modulation, we’re referring to tools that move parameters for you, based on how you have configured them. The two main modulators you can use are:

  • LFOs: Low Frequency Oscillators. These emit a signal in a given shape (e.g. sine, triangle, square, etc.) at a certain speed. They can be synced to your song’s tempo or not. LFOs are often included in synths, but you can also find instances in Max for Live patches.
  • Envelopes: Envelopes react to an incoming signal and can then be shaped how you want. Compressors, as we discussed recently, kind of work on an envelope principle.

There are multiple aspects of a sound you can modulate. While there are numerous tools out there to help you with that, it’s good to know that there are a few things you can do within your DAW. The main things you can modulate are:

  • Amplitude (gain, volume): Leaving the level of a sound in the same position for a whole track is very static. While there’s nothing wrong with that, it means that the sound is lacking dynamics.
  • Stereo position (panning): Sounds can move from left to right if you automate the panning or use an autopanner.
  • Distance (far, close): This is a great thing to automate. You can make sounds go further away by high-passing and filtering the higher frequencies. Combined with the volume, this really pushes the sound away.
  • Depth (reverb): Adding reverb is a great way to add space and if you modulate, it makes things very alive.
  • Sound’s length (ADSR, gating): If you listen to drummers, they’ll hit their percussion so that the length constantly changes. This can be done by modulating a sampler’s ADSR envelope.
  • Filtering: A filter’s frequency and resonance changing position as the song changes offers a very ear pleasing effect.

Some effects you already know are actually modulating tools: chorus, flanger, autopan, phaser, and reverb. They all play with panning and depth. Adding more than 2-3 instances in a song can cause issues, so it’s good to approach each channel individually.

My suggestion: Have one LFO and one envelope on every channel and map them to something: EQ, filter, panning, gain, etc.

Some amazing modulators that offer really good all-in-one options that you might enjoy (as I do, for a quick fix on a boring stem):

QuatroMod

LFO Tool by XFER Records

ShaperBox by Cableguys – My go to to really bring sound to life.

Movement by Output  – This one is stellar and really can make things feel messy if pushed too far but the potential is bonkers. You instantly turn anything into a living texture that is never boring.

Automation

Automation is what you draw in your DAW to create a quick-moving or long-evolving effect. You might already know this, but you’d be surprised how often it’s underused. How can you tell?

I have my own set of rules and here are some:

  • Each channel must have at least one long, evolving movement. I’m allergic to straight lines and will sometimes slightly shift points so they have the smallest slant. My go-tos: amplitude, EQ, or filters.
  • Out of all the potential parameters of each channel, I want at least 3 things moving.
  • Each channel must have at least 3 quick, unique, fast changes.
  • Include at least 3-5 recorded live tweaks. I like to take a MIDI controller, map certain parameters, and then play with the knobs and faders. I record the movements and can then edit them wherever I want in the song. This human touch really makes something special.

While working with automation, one thing I love is to use Max for Live patches that create variations, record them as automation, and then edit them. It’s like having an assistant, and there are great options to choose from.

Chaos

By “chaos” I mean using random generators. They would fit under the umbrella of modulators, but I like to put them in their own world. There are multiple uses for generators. You can take any LFO and switch it to a random signal to make sure there’s always a variable that changes. This is particularly useful with amplitude and filtering; it really adds life. You can also use the random module in the MIDI tools, or a humanizer on a MIDI channel. Both will make sure the notes are always changing a little.
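Two of these ideas are easy to sketch: a sample-and-hold "random LFO" (a new random value held at a steady rate) and a MIDI-style humanizer that jitters velocities. A toy illustration in numpy; the parameter choices are assumptions:

```python
import numpy as np

def random_lfo(n, rate_hz, sr=44100, seed=7):
    """Sample-and-hold random signal: pick a new 0..1 value `rate_hz`
    times per second and hold it until the next one."""
    rng = np.random.default_rng(seed)
    step = int(sr / rate_hz)
    values = rng.uniform(0.0, 1.0, n // step + 1)
    return np.repeat(values, step)[:n]

def humanize(velocities, amount=10, seed=7):
    """Nudge MIDI velocities by up to +/- `amount`, clamped to 1..127."""
    rng = np.random.default_rng(seed)
    jitter = rng.integers(-amount, amount + 1, len(velocities))
    return np.clip(np.asarray(velocities) + jitter, 1, 127)

wobble = random_lfo(44100, rate_hz=4)    # 4 new values per second
vels = humanize([100, 100, 100, 100])    # slightly uneven hits
```

Map `wobble` to a gain or filter parameter and no two bars will ever modulate the same way, which is exactly the "always a variable that changes" effect described above.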

Saturation

If we think back to the earlier example of how analog gear is constantly moving, using a saturator is a good way to bend perception. We previously discussed saturators in an earlier post, but we didn’t talk about a super useful tool: the channel strip, which often includes an analog feel. It remains transparent, but it does something to the signal that moves it away from a sterile digital feel.

My favorite channel strips would be:

The Virtual Mix Rack by Slate Digital. Raw power.

McDSP Analog channel

Slam Pro

 

SEE ALSO : Getting feedback on your music