Tag Archive for: sound design

Using Modular Can Change the Way You View Music Production

Are “sound design” and “sequencing” mutually exclusive concepts? Do you always do one before you do the other? What about composition—how does that fit in? Are all of these concepts fixed, or do they bend and flex and bleed into one another?

The answers to these questions might depend on the specific workflows, techniques, and equipment you use.

Take, for example, an arpeggiator in a synth patch. There are two layers of sequencing in an arpeggio: the first layer is a sustained chord, the second is the arpeggiator stepping through it. Speed the arpeggiator up into the audio rate and we no longer have an audible sequence made up of discrete notes, but a complex waveform with a single fundamental. Just like that, sequencing has become sound design.
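To make that concrete, here’s a rough sketch of the idea in Python. The note frequencies, sample rate, and step rates are arbitrary choices for illustration, not anything prescribed by a particular synth:

```python
import math

def arpeggio(step_hz, notes_hz=(261.6, 329.6, 392.0), sr=44100, seconds=1.0):
    """Step through a C major triad at step_hz steps per second."""
    out, phase = [], 0.0
    for i in range(int(sr * seconds)):
        t = i / sr
        # which note of the "sustained chord" the arpeggiator is on
        freq = notes_hz[int(t * step_hz) % len(notes_hz)]
        phase += 2 * math.pi * freq / sr  # phase-continuous oscillator
        out.append(math.sin(phase))
    return out

slow = arpeggio(4)    # 4 steps/sec: an audible three-note arpeggio
fast = arpeggio(880)  # 880 steps/sec: the stepping itself becomes timbre
```

At 4 steps per second the ear tracks individual notes; at 880 the pattern repeats hundreds of times per second and fuses into a single complex tone whose character is set by the stepping rate, not by the chord.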

These two practices—sequencing and sound design—are more ambiguous than they seem.

Perhaps we only see them as distinct from each other because of the workflows that we’re funneled towards by the technologies we use. Most of the machines and software we use to make electronic music reflect the designer’s expectations about how we work: sound design is what we are doing when we fill up the banks of patch slots on our synths; sequencing is what we do when we fill up the banks of pattern slots on our sequencers.

The ubiquity of MIDI also promotes the view of sequencing as an activity that has no connection to sound design. Because MIDI cannot be heard directly, and only deals with pitch, note length, and velocity, we tend to think that that’s all sequencing is. But in a CV and Gate environment, sequencers can do more than sequence notes—they can sequence any number of events, from filter cutoff adjustments to clock speed or the parameters of other sequencers.

Modular can change the way you see organized sound

Spend some time exploring a modular synthesizer and these sharply distinct concepts quickly start to break down and blur together.

Most people don’t appreciate how fundamentally, conceptually different CV and gate is from MIDI. MIDI is a language, designed according to certain preconceptions (the tempered scale being the most obvious one). CV and gate, on the other hand, are the same stuff that audio is made of: voltage, acting directly upon circuits with no layer of interpretation in between. Thus, a square wave is not only an LFO when slowed down, or a tone when sped up; it is also a gate.
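A tiny Python sketch (frequencies picked arbitrarily) makes the point: the same square-wave function is an LFO, a gate, or a tone depending only on how fast you run it and how you read it:

```python
import math

def square(freq_hz, t):
    """A bipolar square wave; the signal itself has no fixed role."""
    return 1.0 if math.sin(2 * math.pi * freq_hz * t) >= 0 else -1.0

t = 0.3                           # some moment in time, in seconds
lfo_value = square(0.5, t)        # at 0.5 Hz: a slow LFO
gate_open = square(2.0, t) > 0    # read as on/off: a gate
tone_sample = square(220.0, t)    # at 220 Hz: an audible tone
```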

What that square wave is depends entirely on how you are using it.

You can say the same thing about most modules. They are what you use them for.

Maths from Make Noise. It’s a modulator. No, it’s a sound source. No, it’s a modulator.

To go back to our original example: a sequencer can be clocked at a rate that produces a distinct note, and that clock’s speed can itself be modulated by an LFO, so the voice that the sequencer is triggering goes from a discrete note sequence, to a complex waveform tone, and back again. The sound itself goes from sequence to sound effect and back to sequence…

Do you find this way of looking at music-making productive and enjoyable, or do you prefer to stick to your well-trodden workflows? Does abandoning the sound design – sequencing – composition paradigm sound like a refreshing, freeing change to you? Or does it sound like a recipe for never finishing another track ever?

SEE ALSO : “How do I get started with modular?”

Are Music Schools Worth The Investment?

Whether or not music schools are worth the money might spur a heated debate—schools worldwide might not like what I’m about to say, but I think this topic needs to be addressed. What’s outlined in this post is based on my personal experience; I invite anyone who wants to discuss this topic further to contact me.

Music schools: an overview

Many people over the last few years have been asking my opinion about enrolling in music production schools. There are many production and engineering schools in the world, and a lot of them charge a lot of money to attend. In Montreal, we have Musitechnic (where I have previously taught mastering and production) and Recording Arts. Most major cities around the world have at least one engineering school, and if not, people can still study electro-acoustics at a university. A university degree takes at least three years; most private schools condense the material into one year. During that time you’ll study the physics of sound, mixing, music production in DAWs, recording, and sometimes mastering. While each of these subjects usually takes years to really master, the introduction to each can be very useful, as you’ll learn the terms and logic of how these tasks work and what they are for.

If the teachers are good at explaining their topics and have a solid background, there’s nothing quite like being in the presence of someone with a great deal of experience, not only for the valuable information they provide, but also for the interpersonal context. Having a good teacher will pay off if you ask questions and are curious. While I don’t teach at Musitechnic anymore, some of my past students are still in contact with me and ask me questions—I even hired some for internships. Many students have told me that they remembered more from hearing about their teacher’s experiences than from the class content or material.

One issue with audio teachers I hear about a lot is that teachers can be stuck in a specific era or on a precise genre, which might be difficult for a student to relate to; there might be a culture clash or a generation gap between student and teacher.

For instance, if a school has teachers who are from the rock scene, many people who are interested in electronic music or hip hop will have a really hard time connecting with them. Similarly, sometimes the teachers who make electronic music can even be from a totally different sphere as well, and mentalities and approaches can clash.

The advantages of attending a school or program

There are, however, many beneficial outcomes from attending a music school:

  • you’ll get a solid foundation in audio engineering, and get validation from experts.
  • you’ll end up getting a certificate that is recognized in the industry.
  • you’ll have access to resources, equipment and experienced teachers that you might not otherwise find.

The main issue I have with some music schools is how they sell “the dream”, in most cases. The reality of the music industry is really harsh. For instance, a school might tell students that when they graduate, they can open a studio or work for one. While after graduating you might have some skills and experience that you didn’t have before, nothing guarantees that people will come to you to have their music mixed. That said, getting your first client(s) will eventually bring in other clients and opportunities.

“What’s the best way to get a full time job in the music industry or to become an engineer?” I’m often asked, and I’m very careful about how I answer this question. I described my thoughts on finding full-time work in the music industry in a previous post, but I’ll share some points about this topic again here and how it relates to music schools:

  • Whatever anyone tells you or teaches you, even if you apply what they say to the finest level of detail, it’s likely that things still won’t work out the way you envision them. I know this sounds pessimistic, but the reality is that no path produces the same results for any two people in the music/audio world.
  • The industry is constantly changing and schools aren’t always keeping up. If you want to make things work, you need to be able to teach yourself new skills, and fast—being self-sufficient is critical to “making it” out there.
  • Doing things and learning alone is as difficult as going to school, but it’s less expensive. What a school provides is a foundation of knowledge that is—without question—valuable. For instance, the physics of sound won’t change in the future (unless one day some revolutionary finding contradicts the current model, which isn’t going to happen anytime soon).
  • Clients don’t always care where you’re from or what your background is, as long as they get results they like. Your reputation and portfolio might speak more for themselves than saying you went to “School of X”. Where schools or your background can make a difference, though, is if you apply to specific industries, such as video game companies; if you already have some experience with the software they use, companies will see that as a bonus. But I know sound designers at some of those companies who’ve told me that your portfolio of work matters more. For instance, one friend told me that they really like when a candidate takes a video and completely re-makes its audio and sound design; this matters more than knowing specific software, which can always be learned later.
  • The most important thing is to make music daily and to record ideas on a regular basis. Finishing quality songs (see my previous post about getting signed to labels) and having them exposed through releases with labels, by posting them on YouTube channels, self-releasing on Bandcamp, or filling up your profile on SoundCloud can all be critical to reaching potential clients. One of the main reasons I am able to work as an audio engineer with my own clients is mostly the reputation as a musician I built a while ago. I often get emails from people who say they love my music, and that’s one of the main reasons they want their music to be worked on by me specifically. Not many schools really teach the process of developing aesthetics (i.e. “your sound”) or the releasing process. While some do, both of those topics also change quickly, and you need to adapt. It feels like every six months something changes significantly, but knowing some basics of how to release music certainly helps.

Would I tell someone not to attend a music school?

Certainly not. Some people do well in a school environment, and similarly, some people don’t do well at all on their own. So knowing where you fit most is certainly valuable in your own decision-making about schools. Perhaps a bit of both worlds would be beneficial.

Will a school get you a job in the audio world?

Absolutely not—this is a myth that I feel we need to address. It’s not okay to tell this to students or to market schools this way; it would be as absurd as saying that everyone who graduates from acting schools will find roles in movies and make a living from acting.

What are the alternatives to music schools?

If you don’t think music school is for you—because you don’t have the budget for it, you’re concerned about the job market afterwards, or you’re not someone who does well in a classroom—there are still other options:

  • Take online classes. This is a no-brainer because there is a huge number of online classes, courses, and schools, and you can even look at international ones. You can also work on classes at a time that fits your schedule, which means you can invest some of your time off from work into it. Slate Digital has some nice online classes, as does ADSR.
  • Become a YouTube fiend. YouTube has a lot of great content if you’re good at finding what you need. You can create a personal playlist of videos that address either a technique or a topic that is useful. There are also videos where you see people actually working, and they’re usually insightful.
  • Get a mentor. People like myself or others in the industry are usually happy to take students under their wing. While you can find most information online, one advantage of having a mentor is to speed up the search for precise information. How can you learn a precise technique for a problem if you don’t even know what it is? Well, someone with experience can teach you the vocabulary, teach you how to spot a specific sound, and teach you how to find information about it. “How do they make that sound?”, I sometimes hear, as some stuff feels magical to students until I explain that it’s a specific plugin. In my coaching group, we even have a pinned topic where we talk about certain sounds and how they’re made.

I hope this helps you make your own judgments about music schools!

SEE ALSO : On Going DAWless

Taking breaks from music-making

It’s strange how some topics seem to pop up in the music world again and again, both online and in person—taking breaks from music being one of them. During the summer in Canada, most people—including musicians—don’t want to stay indoors as much. Many musicians seem to get FOMO this time of year because they’re not making music. Others are hit by writer’s block (myself included), and some people have asked me whether I think music-making should be a daily routine. While I love this topic, there are multiple ways to approach music production routines and taking breaks from music; I’m sharing some of my own views here, based on my experience.

Taking breaks as you work

This usually surprises a lot of people, but when I work on production or mixing, I take a lot of breaks. I often notice that even after just 10 minutes of working hard, you can lose track of the tone of your song. You get used to what “works”, but the low end or the highs might be too much and you can’t tell because you’ve lost perspective. Even volume can be difficult to assess when your ears are fatigued; you might be playing too loud and not realize it.

Taking a break of 10 seconds or so every 10-15 minutes can prevent fatigue and will help restore your perspective on your song.

If you’re in a creative mood and want to do more, I would strongly recommend taking a break after an hour to test the true potential of your music. If you’re familiar with this blog, you probably aren’t surprised to read that I recommend actually stopping work on a particular song after an hour and working on another one instead, or even doing something completely different.

Taking breaks and making new songs

Sometimes you’ve made a bunch of songs and you feel like you’re repeating yourself, or worse, everything feels annoying (red flag: writer’s block ahead). Some people feel they need to take a break and not open their DAW at all for a while. Is that a good idea?

Yes and no.

My studio is in a building in Montreal that also houses other studios, with all kinds of musicians. The ones who impress me the most are the jazz and classical musicians; they have a very, very intense practice schedule. In talking with them, they say that skipping just one day of practice has an impact on their mastery of their instruments. I can relate: when I take time off over a 3-day weekend, on the Monday I am a bit slower to figure out which tool works best for a specific situation. If I work on music, it takes me a bit more time to problem-solve. In a way, I have to agree with the jazz and classical musicians here, even though our music worlds are quite different.

The difference between me—as an audio engineer and electronic musician—and classical and jazz musicians is that I’m constantly working in a space where I need to invent new ideas, as opposed to practicing something over and over to master it. For my live sets and productions, I do rehearse and play my music—my workflow isn’t just clicking a mouse around a screen. I intervene by hand, using MIDI controllers, mixing manually, and playing with knobs during sound design to create new ideas. I see creativity as a muscle that needs to stay fit to be powerful, but if you go to the gym regularly, you know muscles also need rest in order to grow.

My conclusion on taking breaks from music is this: I think it’s important to work on audio-related tasks daily in order to stay focused, but when it comes to creating new ideas, creativity is not something that can be forced—it needs to come by itself, naturally. Whenever I push myself too hard to force an idea to come to life, it sounds wrong. The best ideas are spontaneous, often invented quickly, and done without much shaping.

So what does this mean for the musician?

Consider taking long breaks if you have really negative feelings towards what you do, or if you don’t feel good about making music. When taking time off from pursuing your own music creatively, here are some alternatives and things you can do during that downtime:

Sound design. Try to see if you can spend time creating one sound you like from scratch, e.g. a pad.

Learn production techniques. You can register for online classes to learn something new; ADSR has plenty at low prices.

Explore presets. Every effect or instrument you have comes with presets, and now you have time to explore them all. Knowing how your presets sound helps you quickly reach a specific aesthetic when you need it.

Create templates. Have you considered creating a template for Ableton? I have multiple templates for sound design, mixing, jamming as well as song structure templates to play with.

Build macros. Use multiple effects and assign them to some knobs to see how you can alter sounds quickly.

Sample hunting. So many sites exist for finding samples, but time to shop for them is rare. You can do that now.

Build new references. If you don’t have a folder with reference tracks in it, it’s time to start, and if you do, add new ones. A good way is to make reference playlists on Soundcloud or YouTube.

Try demos and sample them. I love getting a bunch of VST/AU demos to try out and then sampling them. Eventually I get to know which new virtual synth or effect I really like.

Re-open projects that have been pending or recycle them. You might have unfinished songs and sometimes they are a good place to scavenge for samples or ideas to use in other songs.

Revisit past projects you’ve worked on and liked to remind yourself of methods you used that worked. Whenever I feel I need a break but still want to spend some time on music, I go through past projects to see how I worked and what could have been done better—I always learn something from revisiting old work.

All that said, most importantly, when you take a break from music, do not sell any gear or buy anything new. Just wait. If you like music and making it, chances are high that you’ll be doing it for years to come. Sometimes we need a break, but breaks don’t mean you have to give up completely. The feeling of needing a break is temporary—even if it’s a long break—but your love of music is permanently with you.

SEE ALSO : Are Music Schools Worth The Investment?

Using MIDI controllers in the studio

People often say that MIDI controllers are mostly for performing live, but they can also be your studio’s most useful tool. My advice to people who want to invest in gear—especially those who aren’t happy working only on a computer and dream of having tons of synths (modular and such)—is to start by investing in a controller first.

There are multiple ways to use MIDI controllers; let me share some of my favourite techniques with you and give you advice to easily replicate them.

Controllers for performing in studio

One trend I’ve been seeing in the last few months is producers sharing how they perform their songs in-studio as a way to demonstrate all the possibilities found within a single loop. This is not new—many people like to take moments from live recordings and edit them into a song—but it’s becoming clear that after years and years of music edited to have every single damn detail fixed, artists are realizing that this clinical approach makes a track cold, soulless, and robotic rather than organic. If you’re still touching up details at version 76 of your song, you’ve probably heard it about 200 times—no one will ever listen to your track that many times. My advice is to leave some mistakes in the track and let it have a raw side. Moodymann’s music, for example, is praised and in demand because his super raw approach makes electronic music feel very organic and real. Performing your music in-studio to create this type of feeling is pretty simple; it’s super fun and it inspires new ideas too.

For in-studio jams, I recommend the Novation Launch Control XL, which has a combination of knobs and sliders, plus it’s a control surface: depending on where you are on the screen, it can adapt itself. For instance, with the “devices” button pressed, you can control the effects on a specific channel and switch the knobs to control the on-screen parameters.

When I make a new song using a MIDI controller, I’ll start with a good loop. Then I’ll use my controller to quickly play with the different mixes I can create from that loop. Sometimes, for example, I want to try the main idea at different volumes (75%/50%/25%), or at different filter levels; some sounds feel completely different and sound better when you filter them at 75%. Generally, I put these effects on each of my loops: a 3-band EQ, a filter, a delay, a utility (gain), and an LFO.
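For the curious, the gain and filter stages of that per-loop chain can be sketched in code. This is only a toy illustration (a one-pole low-pass standing in for a real filter device, with made-up cutoff and gain values), not what any DAW does internally:

```python
import math

def apply_gain(samples, amount):
    """Utility/gain stage: 0.75 gives the '75% volume' variant."""
    return [s * amount for s in samples]

def one_pole_lowpass(samples, cutoff_hz, sr=44100):
    """A minimal one-pole low-pass filter."""
    a = math.exp(-2 * math.pi * cutoff_hz / sr)  # smoothing coefficient
    y, out = 0.0, []
    for s in samples:
        y = (1 - a) * s + a * y
        out.append(y)
    return out

# a 110 Hz sine standing in for a loop, 0.1 s long
loop = [math.sin(2 * math.pi * 110 * i / 44100) for i in range(4410)]
variant = one_pole_lowpass(apply_gain(loop, 0.75), cutoff_hz=800)
```

Swapping the gain amount and cutoff gives you the 75%/50%/25% variants described above.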

Next, I’ll record myself playing with the loop for a good 20 minutes so that I have very long stems of each loop. Then when it comes to arranging, I’ll pick out the best parts.

TIP: I sometimes like to freeze stem tracks to commit all their effects, so I have raw material I can’t go back and tweak endlessly.

Controllers for sound design

I find that the fun part of sound design involving human gestures comes from creating movements an LFO can’t really produce. It’s one thing to assign a parameter to an LFO for movement, but there’s nothing quite like doing it manually—and the best part is to combine automated and human-created movements.

I use a programmed LFO for super fast modulation that I can’t do physically with my fingers, then adjust it to the song’s rhythm or melody—just mild adjustments, usually. For instance, you could have super fast modulation on a resonance parameter with an LFO (or with Live 10.1’s automation curves), then use your controller on the frequency parameter to give it a more organic feel.
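As a hedged illustration (the parameter ranges and rates here are invented), here is how summing a fast programmed LFO with a slow recorded gesture might look in code:

```python
import math

def resonance(t, base=0.3, lfo_hz=12.0, gesture=lambda t: 0.0):
    """Fast LFO for movement fingers can't do; 'gesture' is a stand-in
    for a slow, recorded human knob movement layered on top."""
    lfo = 0.1 * math.sin(2 * math.pi * lfo_hz * t)  # fast, shallow wobble
    return min(1.0, max(0.0, base + lfo + gesture(t)))  # clamp to 0..1

# a hypothetical hand movement: easing the knob up over four seconds
hand = lambda t: 0.4 * min(t / 4.0, 1.0)
curve = [resonance(i / 100, gesture=hand) for i in range(800)]  # 8 seconds
```

The LFO supplies the motion that is physically impossible by hand, while the slow gesture keeps the overall shape human.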

Recently, I’ve been really enjoying a complementary modular ensemble for Live called Signal by Isotonik; it allows you to build your own signal flow to go a bit beyond the usual modules that you’ll get in Max for Live. Where I find Signal to be a huge win is when it’s paired with PUSH, which is by far the best controller you can get for sound design. PUSH gives you quick access to the different parameters of your tools, and if you make macros it becomes even more organized.

Controllers for arrangements

Using MIDI controllers in arrangements is, to me, where the most fun can come from; using them can completely change the idea of a song.

For instance, if your song has a 3-note motif with the same velocity across the board, I love to set the volume of the three notes to different levels. When we speak, all the words we use in a sentence have different levels and tones. For example, if you say to someone “don’t touch that!”, the intonation of any particular word changes the emphasis of what you’re saying: “DON’T touch that!” is very different from “don’t touch THAT!” The same philosophy applies to a 3-note melody: each note is a word, and you can decide which ones to emphasize and how a certain emphasis fits your song’s main phrase or motif.
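A quick sketch of that emphasis idea, with a hypothetical 3-note motif as (pitch, velocity) pairs and made-up weighting values:

```python
# a C-E-G motif, every note at the same MIDI velocity
motif = [(60, 100), (64, 100), (67, 100)]

def emphasize(notes, weights):
    """Scale each note's velocity, clamped to the MIDI 1-127 range."""
    return [(pitch, max(1, min(127, round(vel * w))))
            for (pitch, vel), w in zip(notes, weights)]

dont_touch = emphasize(motif, [1.2, 0.7, 0.8])  # "DON'T touch that!"
touch_that = emphasize(motif, [0.7, 0.8, 1.2])  # "don't touch THAT!"
```

The same three pitches read completely differently depending on which note carries the weight.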

If you assign a knob or fader on your controller to the volume of the melody, you can also control the amplitude of each note. You can do this for the entire song, or you can copy the best takes and apply their movement to the entire song. I find there’s a slight difference in modulation depending on whether you use a knob or a fader; each seems to have a different curve—when I play with each, they turn out differently (but perhaps that’s just me). Explore and see for yourself!

TIP: Using motorized faders can be a huge game changer. Check out the Behringer X-Touch Compact.

Another aspect of controllers that people don’t often consider is foot pedals. If you’re the type who taps your foot while making music, you could take advantage of that twitching by applying it to a specific parameter. Check out the Yamaha FC4A; use it with PUSH and you have a strong arsenal of options.

SEE ALSO : Equipment Needed to Make Music – Gear vs. Experience vs. Monitoring

Workflow Suggestions for Music Collaborations

One of the most underestimated approaches to electronic music is collaboration. It seems to me that because of electronic music’s DIY approach, people believe they need to do absolutely everything themselves. However, almost every time I’ve collaborated with others, I’ve heard them say “wow, I can’t believe I haven’t done that before!” Many of us want to collaborate, but actually organizing an in-person session can be a challenge. In thinking about collaboration, and after some powerful collaboration sessions of my own, I noted which aspects of our workflow helped create a better outcome. There are some do’s and don’ts in collaborating, so I’ve decided to share them with you in this post.

Have a plan

I know this sounds obvious, but the majority of people who collaborate don’t really have a plan and will just sit and make music. While this works to some degree, you’re really missing out on the extra fun that comes from planning ahead. I’m not talking about big, rigid plans, but simply having an idea of what you want to accomplish in a session. Deciding you’ll jam can be a plan in itself; deciding to work on an existing track could be another; working on an idea you’ve already discussed could be a more precise plan.

Personally, I like to have roles decided for each person before the session. For example, I might work on sound design while my partner thinks about arrangements. When I work with a musician, I usually already have in mind that this person does something I don’t do, or does it better than I can. The most logical way to work is to have each participant take a role in which they do what they do best.

If you expect to excel at sound design, mixing, beat sequencing, editing, etc., all at once, you’re probably going to end up a “jack of all trades, master of none”. Working with someone else is a way to learn new things and to improve.

A good collaborative session creates a total sense of flow; things unfold naturally and almost effortlessly. With that in mind, having a plan gives the brain a framework that determines the task(s) you need to complete. One of the rules of working in a state of flow is to do something you know you do well, but to create a tiny bit of challenge within it.

Say “yes” to any suggestions

This is a rule that I really insist on, though it might sound odd at first. Even when an idea seems silly, you should say yes to it, because you’ll never know where it will lead unless you try it. I’ve been in sessions where I constantly had the impression that I was doing something wrong because we weren’t following the “direction” of the track I had in my head. But what if veering off my mental path leads us to something new and refreshing? What if my partner – based on a suggestion that may have seemed wrong at first – accidentally discovered a sound we had no idea would fit in there?

This is why I find that the “yes” approach is an absolute win.

Saying yes to everything often just flows more naturally than saying no. However, if the “yes” approach doesn’t work easily, don’t force it; it’s much better to put an idea aside and return to it another day if it’s not working.

Trust your intuition; listen to your inner dialogue

When you work with someone else, you have another person who’s also hearing what you’re hearing, and who will interact with the same sounds and try new things. This new perspective disconnects you from your work slightly and gives you a bit of distance. If you pay attention, you’ll notice that your inner dialogue may go something like “oh, I want a horn over that! Oh, let’s bring in claps!” That inner voice is your intuition, your culture, and your mood throwing out ideas; sharing these ideas with one another can help create new experiments and layers in your work.

Combining this collaborative intuition with a “yes” attitude will greatly speed up the process of completing a track. Two people coming up with ideas for the same project often work faster and better than one.

Take a lot of breaks

It’s easy to get excited when you’re working on music with another person, and when you do, some ideas might feel like they’re the “best new thing”, but these same ideas could actually be pretty bad. You need time away from them to give yourself perspective; take breaks. I recommend pausing every 10 minutes. Even pausing for a minute or two to talk or to stand up and stretch will make a difference in your perceptions of your new ideas.

Centralize your resources

In collaborating, when you reach the point of putting together your arrangements, I would say it’s important to have only one computer as the main control station for your work. Ideally you’d want an external hard drive that you can share between computers easily; this way you can use everyone’s plugins to work on your sounds. One of the most useful things about teaming up with someone else is that you get access to their resources, skills, materials, and experience. Make sure to get the most out of collaborating by knowing what resources you can all draw upon, and then select a few things you want to focus your attention on. It’s easy to get distracted or to think you need something more, but I can tell you that you can do a lot with whatever tools you have at that moment. Working with someone else can also open your eyes to tools you perhaps didn’t fully understand, weren’t using properly, or weren’t using to their full potential.

Online collaboration is different

Working with someone over the internet is a completely different business than working together in person. You won’t be working at the same time, and some people work more slowly or more quickly than you. I’ve tried collaborating with many people online and it doesn’t always work; it takes more than just the will of both participants—it demands cohesion and flexibility. All my previous points about collaborating in person also apply to collaborating online. Assigning roles and having a plan really helps. I also find that sharing projects that aren’t working for me with another person will sometimes give them a new life.

If you’re a follower of this blog, you’ll often read that one of the most important things I stress about production is letting go of your tracks; this is essential in collaborating. I usually try to shut off the inner voice that tells me my song is the “next hit”, because thinking this way usually never works. No one controls “hits”, and being aware of that is a good start. That said, when you work with someone online, since this person is not in the room with you and might work on the track while you’re busy with something else, I find it works best to be relaxed about the outcome. This means that if I have a bad first impression of what I’m hearing from my partner, I usually wait a good 24 hours before providing any feedback.

What if you really don’t like what your partner is making?

Not liking your partner’s work is probably the biggest risk in collaborating. If things are turning out this way in your collaboration, perhaps you didn’t use a reference track inside the project, or didn’t set up a proper mood board. A good way to avoid problems in collaboration is to make sure that you and your partner are on the same page mentally and musically before doing anything. If you both use the same reference track, for example, it will greatly help to avoid disasters. If you don’t like a reference track someone has suggested, I recommend proposing one you love until everyone agrees. If you and your partner(s) never agree, don’t push it; maybe work with someone else.

The key to successful collaborations is to keep it simple, work with good vibes only, and to have fun.

SEE ALSO : Synth Basics

Using Quad Chaos

I’m proud to announce the release of our first patch – Quad Chaos. I met Armando, the programmer, on the Max/MSP group on Facebook and his background was exactly what I was looking for and we got along very well. Quad Chaos is basically a patch version of what this blog is about: finding ways to have innovative sound design through modulation and chaos.

Speaking of chaos, the only “rule” for using Quad Chaos is to resample everything you do, because we intentionally wanted it to be something that works ephemerally; something you can’t really control and just have to go with. There are many tools out there you can use to do anything you want, but we wanted to create something experimental that can be fun and creative at the same time.

Make sure these knobs are up!

The first thing that appears when you load up Quad Chaos is a screen in which you can add up to four samples. If you hear nothing when you load in a sound, you probably need to check the volume, direction, and panning knobs. In the demo video, Armando used short samples, but I find that the magic truly comes together when you load up longer files such as field recordings, things that are four bars long, or even melodic content. I don't find that Quad Chaos works well if you load a sample that has multiple instruments in it, but I still need to explore it more and I could be wrong about that. My advice is to start with one sample loaded into Quad Chaos, and then, with your mouse, highlight a portion of it. Personally, I like to start with a small selection based on the waveform content I see. I'll try to grab one note/sound, along with some silence. Once you make a selection, you'll hear a loop playing that might sound like something in a techno track…but this is just the beginning.

While it’s very tempting to load in all four samples at once, if you do things this way, Quad Chaos will get out of control quickly; I like to start with one layer and then build from there.

Once you isolate a section that loops to your taste, it's time to engage the modulation. One trick that I like to do with any synth or piece of gear is to move one knob to its maximum and then minimum, quickly then slowly, to simulate what an LFO could do. When I find something I like, I'll assign an LFO or envelope to it and start my tests.

For example, in Quad Chaos you can assign the first modulator to a direction; click on "dir" and you'll see numbers underneath, which represent the modulation source. To access the modulation section, use the drop-down menu and pick "mod", and you'll see the first modulation.


Depending on how you set it up, you'll start hearing results as your sound now has modulation in full effect. I know the lack of sync in the plugin might seem odd, but to repeat myself, a lack of sync is needed to create "chaos"; this approach gives more of an analog feel to what you make, and you can get some pretty polyrhythmic sequences because of it as well.

As I mentioned earlier, I usually start my sound with just an LFO set to a sine curve, and then I explore slow/fast oscillation to see what kind of results I get. I'll find a sweet spot somewhere in the middle, then I'll try all the different oscillations to hear other results. I'm very much into the random signal just because it creates the impression of constantly "moving" sonic results. Afterwards, I have a lot of fun scrolling through the recorded results of these experiments and picking out one-bar loops/sections. I find that the random signal is always the one that gives me pretty interesting hooks and textures.

Once you’re happy with the first layer you’ve created with the first loop, you can use the other loops to create complex ideas or simply to add a bit of life to the first one. I’ve seen a few artists using Quad Chaos already and everyone seems to comes up with really different use-cases and results. One thing I often see is people dropping some important samples of a production they’re currently working on into the plugin to get some new ideas out of them. My friend Dinu Ivancu – a sound designer that makes movie trailers – tried out Quad Chaos and had some very lovely feedback of his own:

I love it JP!

[Quad Chaos] is a fantastic tool. I would love it even more if it had a few quality live options. Still though, as is, it’s an amazing tool to generate live and organic sounds out of ordinary samples. I’ll send you something I made with it and just two soft-synths. It’s fantastic. That reverb is AMAZING! Congrats – you guys did a great job. I’ll try to help [Quad Chaos] get to a wider audience as it’s very, very good for film work!

Dinu Ivancu

I think what Dinu is excited about here is the creation of small-but-detailed organic, improbable textures that are difficult or laborious to make in a very stern, organized DAW. Breaking down the strict boundaries of your DAW opens doors to creating sounds you’d hear in the real world that are completely off-sync and un-robotic. Quad Chaos also includes a built-in reverb to help create space for your sounds (and there are other effects included as well!).

Jason Corder, “Offthesky”, sent us a neat video of himself working with Quad Chaos. Jason shows us how you can record a song live, only using the plugin. It’s very spontaneous; he’s using the macros to create external automation to keep a minimum structure. This approach is something I didn’t initially think of, but seeing Jason do it makes me think that I’ll explore that avenue next time I use it!

You can get a copy of Quad Chaos here and if you make songs or videos, I’d be more than happy to see how you use it!

SEE ALSO : Creating tension in music

Synthesizer Basics

I’ve realized that using synths is a bit of an esoteric process for many (for me it definitely was for a while), so I’d like to share with you some synth basics. I used to read things online in-depth about synths, but didn’t feel like it was really helping me do what I wanted to exactly. Synths can create certain sounds, but the ability to shape these sounds into something you like is another task. When I dove in the modular rabbit hole, I felt like I needed to really grasp how to use them. After years of working with synths, presets have a actually provided me with many answers as to how things are made, and I’ve ended up learning more with presets than with tutorials. It’s probably useful for some to understand some basic concepts with regards to how to use synths in order to create lush or complex sounds, and in order to develop your own set of synth sounds. I’m not going to explain every synthesis concept, but I’ll cover some synth basics.

My personal go-to tools when I get to work with synths are Omnisphere, Pigments, and Ableton’s Operator. They all have different strengths and ways to work that I feel fulfill my needs. When people talk synths, they often discuss which ones are “best”, but I find that these three are pretty powerful, not only for the sounds they create, but for how they work. Speaking of workflow, if a synth doesn’t create something I like quickly, I usually get annoyed as I want to spend time making music and not just spend an hour designing a sound. In the case of these three, they all have several oscillators that can quickly be tweaked in a way you want.

Oscillators

Imagine the oscillator as a voice (I'll explain polyphony another time, as it's a slightly different topic). The oscillator shapes sound by creating a waveform: sine, square, triangle, saw, etc. Each waveform has certain characteristics, and different waveforms have more or fewer harmonics. If you play a note, you'll first see that it creates a fundamental frequency (as in, the note played has its own frequency), followed by the harmonics. Sine waves, because of their simplicity, have basically no harmonics, but a saw wave has a lot.

The sine wave is a fundamental frequency and has no harmonics.
A saw wave is different. The red arrow shows the fundamental frequency, and the green, the harmonics.

As you can see, sine and saw waves create different results, and you can combine them to create richer sounds. When there are more harmonics, the human ear tends to hear the sound as richer, as it covers more frequencies (yes, this is a simple explanation of a more complex topic, but I'll leave it for another time).

So what should you take away from this? Well, when you see a synth with multiple oscillators, realize that you can combine them in sound design. One basic synth exercise I give to students is to start with one oscillator, like a sine wave, then add a second one pitched a bit higher (one octave) using a triangle wave, and then a third oscillator that is a saw, pitched up again. If you play the same note, you'll see the content is altered because the harmonics now interact to create new "sonic DNA".
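To make this exercise concrete, here's a minimal sketch in plain Python (my own illustration — the function names and mix levels are assumptions, not taken from any particular synth) of stacking a sine at the root, a quieter triangle one octave up, and a quieter-still saw two octaves up:

```python
import math

def sine(phase):
    return math.sin(2 * math.pi * phase)

def triangle(phase):
    # Triangle wave: only odd harmonics, falling off quickly (~1/n^2)
    return 4 * abs(phase - math.floor(phase + 0.5)) - 1

def saw(phase):
    # Saw wave: all harmonics, falling off slowly (~1/n)
    return 2 * (phase - math.floor(phase + 0.5))

def stacked_voice(t, freq):
    """Three oscillators, as in the exercise: sine at the root,
    triangle one octave up, saw two octaves up."""
    return (sine(t * freq)
            + 0.5 * triangle(t * freq * 2)
            + 0.25 * saw(t * freq * 4))

# Render one second at a low, illustrative sample rate
sr = 8000
samples = [stacked_voice(n / sr, 110.0) for n in range(sr)]
```

Comparing `samples` with the plain sine shows how the added harmonics change the timbre — the "sonic DNA" — even though the fundamental pitch stays the same.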

This simple starting point should pique your interest in exploring different ways to set the oscillators in order to shape different sounds. Like I explained in a past article, sounds are combinations of layers that create different outcomes; the same goes for synths and oscillators. Synths are a rougher approach, and it takes time at first to feel like you're getting somewhere, but the more you practice, the better you get, and then you can even use a synth to bring richness to samples you layer. For example, I frequently use a low sub sine to give bottom to a wimpy kick.

Envelopes

After deciding on the oscillator content of your synth, next comes shaping it. This is done with an ADSR envelope (Attack, Decay, Sustain, Release). The envelope tells your synth how to interact with the incoming MIDI notes you're sending it. It waits for a note, and then, depending on how the envelope is set, it plays the sound in a way that shapes both the amplitude (volume) and timing. For example, a fast attack means the sound will start playing as soon as the key is pressed, and a long release will let the sound continue playing for a little while after you release it. Each oscillator can have its own envelope, but you could have one general envelope as well. The use of envelopes is one of the best ways to give the impression of movement to a sound. I'm addicted to using the Max envelope patch and will assign it to a bunch of things on all my sounds, but I had to first learn how it worked by playing with it on a synth. While the envelope modulates the amplitude, it can also be used to shape other characteristics, such as the pitch.
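As a rough illustration of how the four stages fit together, here's a hedged Python sketch of an ADSR curve (the parameter values are arbitrary examples, and it assumes the note is held longer than attack + decay):

```python
def adsr(t, attack=0.01, decay=0.1, sustain=0.6, release=0.3, note_length=1.0):
    """Amplitude (0..1) at time t seconds, for a key held for
    note_length seconds. Attack/decay/release are times; sustain is a level."""
    if t < 0:
        return 0.0
    if t < attack:                      # ramp up while the key is pressed
        return t / attack
    if t < attack + decay:              # fall from peak to the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < note_length:                 # hold while the key stays down
        return sustain
    if t < note_length + release:       # fade out after the key is released
        frac = (t - note_length) / release
        return sustain * (1.0 - frac)
    return 0.0
```

Multiplying each output sample by `adsr(t)` is the amplitude case; feeding the same curve into pitch or a filter gives the "shape other characteristics" case mentioned above.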

Filters

You might already be familiar with filters, as they're built into DJ mixers; filters allow you to "remove" frequencies. In the case of a synth, what's useful is that most synths have filters that can be assigned per oscillator, or used as a general way to "mold" all the oscillators together. If you take a low-pass filter, for example, and lower its frequency, you'll see that you smooth out the upper harmonics. In the case of pads, it's pretty common that multiple oscillators will be used to make a very rich sound, but the filter is the key, as you'll want to dull out the result, making your pad less bright and defined.
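To see why a low-pass filter "dulls" a bright wave, here's a minimal one-pole low-pass sketch in Python (roughly 6 dB/octave — much gentler than typical synth filters, and purely my own illustration):

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sample_rate):
    """Simple one-pole low-pass: each output sample moves a fraction of
    the way toward the input, smoothing out the fast wiggles (upper
    harmonics) while keeping the slow ones (the fundamental)."""
    coeff = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in signal:
        y += coeff * (x - y)
        out.append(y)
    return out

# A bright saw wave loses its edge after filtering
sr = 8000
saw = [2 * ((n * 220 / sr) % 1.0) - 1 for n in range(sr)]
dull = one_pole_lowpass(saw, 300.0, sr)
```

The filtered `dull` keeps the 220 Hz fundamental but with the upper harmonics rolled off — exactly the "less bright and defined" pad effect described above.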

LFOs

LFOs are modulators and, as you know, one of my favorite tools. I use them on many things to add life and to give the impression of endless, non-repetitive change. I'll even sync them to a project and use them to accentuate or fix something. In most synths you can use LFOs to modulate one or multiple parameters, just like envelopes. What's fun is to use a modulator to modulate another modulator; for example, I sometimes use LFOs to change the envelope, which gives sounds different lengths over time. Using LFOs on filters is also a good way to vary the presence of your harmonics, creating different textures.
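A sine LFO is just a slow oscillator used as a control signal. Here's a small sketch (rates and depths are arbitrary examples of my own) of one LFO sweeping a filter cutoff, and a much slower one modulating an envelope's release time — a modulator modulating a modulator:

```python
import math

def lfo(t, rate_hz, depth, center):
    """Sine LFO: sweeps 'center' up and down by 'depth' at 'rate_hz'."""
    return center + depth * math.sin(2.0 * math.pi * rate_hz * t)

# An LFO on a filter cutoff: sweep between 200 Hz and 1800 Hz twice per second,
# sampled at 100 control updates over one second
cutoffs = [lfo(n / 100.0, rate_hz=2.0, depth=800.0, center=1000.0)
           for n in range(100)]

# A slower LFO modulating another modulator: varying an envelope's
# release time between 0.1 s and 0.5 s over a 10-second cycle
release_at = lambda t: lfo(t, rate_hz=0.1, depth=0.2, center=0.3)
```

Feeding `release_at(t)` into an ADSR's release parameter is what gives sounds those constantly changing lengths.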

Noise

One of the most misunderstood points in synthesis is the use of noise. Noise is a good way to emulate an analog signal and to add warmth. One of the waveform types an oscillator can produce is noise: white noise or another color. You can add it in the high end or have it modulated by an envelope to track your keys. I like to keep noise waves very low in volume, and sometimes filter them a bit. That said, I use a noise oscillator in every patch I design. Even a little bit of noise as a background layer can create a sense of fullness. If you record yourself with a microphone in an empty, quiet place, you'll notice there's always a bit of background noise. The human ear is used to noise and will be on the lookout for it. Hearing noise in a song or sound creates a certain sense of warmth.

Why do I love Omnisphere and Pigments?

Both Omnisphere and Pigments are very powerful for different reasons. Omnisphere is one of the most used software tools in the sound design industry, as well as by composers who write film scores; Hans Zimmer is known to use it, among others. It has more oscillators than Operator, not just in quantity, but also in emulations of existing synths. For example, you could have your lower oscillator emulate a Juno, add a Moog for the middle one, and finish with an SH-101. Even in real life, you couldn't do that unless you owned all three of those synths, and even then it would be a bit of a mess to organize them all together. Plus, Omnisphere's emulations sound true to the originals. If this isn't convincing enough, Omnisphere also comes with a library of samples that you can layer on top of the oscillators, or you can import your own. Add one of the best granular synthesis engines and you are set for endless possibilities.

Pigments by Arturia
Pigments by Arturia

Pigments is made by Arturia, and it was made with a very lovely graphical approach, where you have your modulators in the lower part of the UI and the sound frequencies in the upper part. You can then easily and quickly decide to add modulation to one parameter, then visually see it move. It’s one of those rare synths that has modulation at its core. This is why I love it; it provides me with numerous quick sounds resulting from deep or shallow exploration.

SEE ALSO : Using MIDI controllers in the studio

More tips about working with samples in Ableton

Recently I was doing some mixing and I came across multiple projects in a row that had some major issues with regards to working with samples in Ableton. One of them is a personal pet peeve: taking a loop from a sample bank and using it as-is. That said, there's no real rule against doing this; if you bought the samples, you are entitled to use them in any way you want.

While I do use samples in my work sometimes, I treat them as a starting point, or as a way to quickly pinpoint the mood of the track I'm aiming for. There's nothing more vibe-killing than starting to work on a new song but losing 30 minutes trying to find a fitting sound, like hi-hats for instance. One of my personal rules is to spend less than 30 minutes tweaking my first round of song production. This means that the initial phase is really about focusing on the main idea of the song. The rest is secondary and could be anything: if you mute every part except the main idea(s), the song will still be what it is.

So why is it important to shape the samples?

Well basically, the real answer is about tying it all together to give personality to the project you’re working on. You want it to work as a whole, which means you might want to start by tuning the sample to the idea.

Before I go on, let me give you a couple of suggestions regarding how to edit the samples in ways to make them unique.

I always find that pitch and length are the quickest ways to alter something and easily trick the brain into thinking the sounds are completely new. Even pitching down by 1 or 2 semitones, or shortening a sample to half its original length, will already give you something different. Another trick is to change where the sample starts. For instance, with kicks, I sometimes like to start playing the sample later in the sound to get a different attack, or custom-make my own using the sampler.
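If you repitch with a classic sampler-style algorithm (speed and pitch linked, no time-stretching), the rate change for a semitone shift follows a simple formula — a quick sketch, using the 1-2 semitone example from above:

```python
def playback_rate(semitones):
    """Rate multiplier for a sampler-style repitch:
    +12 semitones doubles the speed, -12 halves it."""
    return 2.0 ** (semitones / 12.0)

# Pitching a sample down 2 semitones slows it down by about 11 percent
rate = playback_rate(-2)      # roughly 0.891x speed
new_length = 1.0 / rate       # roughly 1.12x the original duration
```

That length change is part of why even small pitch shifts read as "new" sounds: the attack and decay stretch along with the pitch.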

TIP: I love to have sounds change length as the song progresses, either by using an LFO or by manually tweaking them. For example, snares that get longer create tension in a breakdown.

In a past post, I covered the use of samples more in-depth, and I thought I could provide a bit more detail about how you can spice things up with samples, this time using effects or Ableton's internal tools.

Reverb: Reverb is a classic; simply dropping it on a sound will alter it, but the downside is that it muffles the transients, which can make things muddy. Solution: use a Send/AUX channel where you use a transient designer to (drastically) remove the attack of the incoming signal and then add a reverb. In doing this, you'll be adding reverb only to the decay of the sound while the transient stays untouched.

Freeze-verb: One option you'll find in Ableton's reverb is the freeze function. Passing a sound through it and freezing it is like holding a snapshot of the sound. Resample that. I like to pitch it up or down and layer it with the original sound, which allows you to add richness and harmonics to the original.

Gate: So few people use Ableton's Gate! It's one of my favorites. The best way to use it is by side-chaining it with a signal. Think of this as the opposite of side-chain compression: the gate lets the gated sound play only when the other signal is also playing, and it has an envelope that lets you shape the sound. This is practical for many uses, such as layering percussive loops, where the side-chained loop plays only when it detects sound, which makes a mix way clearer. In sound design, this is pretty fun for creating multiple layers from a dull sound, using various incoming signals.

Granular Synthesis: This is by far my favorite tool to rearrange and morph sounds. It stretches sounds, giving them a grainy texture and something slightly scattered-sounding too. Melda Production has a great granular synth that is multi-band, which provides lots of room to treat the layers of a sound in many ways. If you find it fun, Melda also has two other plugins that are great for messing up sound: mTransformer and mMorph.

Grain Delay, looped: A classic and sometimes overused effect, this one is great as you can automate pitch over delay. It is still a great tool to use along with the Looper; they do really nice things when combined. I like to make really short loops of sounds going through the Grain Delay. This is also fun if you take the sound and double its length, as it will be stretched, granular style, creating interesting texture along the way.

Resampling: This is the basis of all sound design in Ableton; resampling yourself tweaking a sound is by far the most organic way to treat sound. If you have Push, it's even more fun, as you can create a macro, assign certain parameters to the knobs, and then record yourself just playing with them. You can then chop the session down to the parts you prefer.

I hope this was useful!

SEE ALSO : Learning how to make melodies

Creating organic sounding music with mixing

I’m always a bit reluctant to discuss mixing on this blog. The biggest mistake people make in mixing is to apply all the advice they can find online to their own work. This approach might not work, mostly because there are so many factors that can change how you approach your mix that it can be counter-productive. The best way to write about mixing would be to explain something and then include the many cascades of “but if…”, with regards to how you’d like to sound. So, to wrap things properly, I’ll cover one topic I love in music, which is how to get a very organic sounding music.

There are many ways to approach electronic music. There's the very mechanical way of layering loops, which is popular in techno, or the use of modular synths/Eurorack. These styles, like many others, have a couple of main goals in mind: making people dance, or showcasing craftsmanship in presenting sounds. One of the first things you want to do before you start mixing is to know exactly what style you want to create.

Wherever you’re at and whatever the genre you’re working in, you can always infuse your mix with a more organic feel. Everyone has their own way, but sometimes it’s about finding your style.

In my case, I’ve always been interested in two things, which are reasons why people work with me for mixing:

  1. While I use electronic sounds, I want them to feel as organic and real as possible. You'll have the impression of being immersed in a space of living, unreal things; that clash between the synthetic and the real is, for me, one of the most interesting things to listen to.
  2. I like to design spaces that could exist. The idea of putting sounds in place brings the listener into a bubble-like experience, which is the exact opposite of commercial music where a wall of sound is the desired aesthetic.

There’s nothing wrong with commercial music, it just has a different goal than I do in mixing.

What are some descriptions we can apply to an organic, warm, rounded sound?

  • A “real” sounding feel.
  • Distance between sounds to create the impression of space.
  • Clear low end, very rounded.
  • Controlled transients that aren’t aggressive.
  • Resonances that aren’t piercing.
  • Wideness without losing your center.
  • Usually a “darker” mix with some presence of air in the highs.
  • Keeping a more flat tone but with thick mids.

Now, with this list in mind, here are some approaches for dealing with your mix and production.

Select quality samples to start with. It's very common for me to come back to a client and say "I have to change your kick, clap, and snare", mostly because the source material has issues. This is because many people download crap sounds via torrents or free sites, which usually haven't been handled properly. See sounds and samples as the ingredients you cook with: you want to compose with the best-sounding material. I'm not a fan of mastered samples, as I've noticed they sometimes distort when compressed, so I usually want something with some headroom. TIP: Get sounds at 24-bit minimum; invest some bucks to get something that is thick and clear sounding.

Remove resonances as you go. Don't wait for a mixdown to fix everything. I usually make my loops and will correct a resonance right away if I hear one. I'll freeze and flatten right away, sometimes even saving the sample for future use. To fix a resonance, use a high-quality EQ with a Q of about 5 maximum, and set your EQ so you can hear what you are cutting. Then cut by about 4-5 dB to start with. TIP: Use FabFilter Pro-Q 3, buy it here.

Control transients with a transient designer instead of an EQ. I find that many people aren't aware of how annoying percussion can be in a mix if the transients are too aggressive; sometimes that's only noticeable once you compress. I like to use a transient designer to lower the impact, just a little, on the ones that are annoying. TIP: Try the TS-1 Transient Shaper, buy it here.

Remove all frequencies under the fundamental of the bass. This means removing rogue resonances and monitoring what you're cutting. If your bass or kick hits at 31 Hz, then remove anything under that frequency. EQ the kick and all other low-end sounds independently.

Support the low end with a sub sine to add roundness. Anemic or confused low end can be swapped out or supported by a sine wave synth that enhances the fundamental frequency and makes it rounder. It makes a big difference to the warmth of the sound. Ableton's Operator will do, or basically any synth with oscillators you can design.

High-pass your busses with a filter at 12 dB/octave. Make sure you use a good EQ that lets you pick the slope, and high-pass gently to give a more analog feel to your mix.

Thicken the mids with a multiband compressor. I like to compress the mids between 200 and 800 Hz. Clients often get it wrong around there, and this range is where the real body of your song lies. The presence it provides on a sound system is dramatic if you control it properly.

Use clear reverb with short decay. Quality reverbs are always a game changer. I like to use different busses at 10% wet and with a very fast decay. Can’t hear it? You’re doing it right. TIP: Use TSAR-1 reverb for the win.

Add air with a high-quality EQ. Please note this is a difficult thing to do properly and is best achieved with a high-end EQ. Gently boost your melodic bus around 15 kHz. It adds a very subtle sheen to the mix and is ear-pleasing in small quantities. TIP: Turbo EQ by Melda is a hot air balloon.

Double-compress all your melodic sounds. This can be done with 2 compressors in parallel: the first set to 50% wet and the second at 75%. You'll have to play with the settings, but this will thicken and warm up everything.

Now for space, I make 3 groups: sounds that are subtle (background), sounds that are in the middle of the space, and sounds that are upfront. A mistake many people make is to have too many sounds upfront and no subtle background sounds. A good guideline is 20% upfront as the stars of your song, 65% in the middle, and the remaining 15% as the subtle background details. If your balance is right, your song will automatically breathe and feel right.

All the upfront sounds are the ones where the volume is at 100% (not at 0 dB!), the ones in the middle are generally at 75%, and the others vary between 50% and 30% volume. When you mix, always play with the volume of your sound to see where it sits best: bring it too low, then too loud, then somewhere in the middle. You'll find a spot where it feels alive.

Lastly, one important thing is to understand that sounds have relationships to one another. This is sometimes “call and response”, or some are cousins… they are interacting and talking to each other. The more you support a dialog between your sounds, the more fun it is to listen to. Plus it makes things feel more organic!

SEE ALSO : More tips about working with samples in Ableton

Tips to add movement and life to your songs

One of the most popular topics in music production is how to make music feel "alive" by creating movement. While I already covered this topic in a past article, today I'll focus on tools you can use and some techniques you can apply to create movement.

First, let’s classify movement into categories:

  • Modulation (slow, fast)
  • Automation (micro, macro)
  • Chaos
  • Saturation

One of the things that makes modular synths very popular is the possibility of controlling and modulating many parameters the way you want, but the other aspect that makes them exciting is the analog side. You've probably seen and heard multiple debates about analog vs. digital, and what's funny is that many people feel they know what the debate is about, yet can't really pin it down.

Take, for example, something we all know well: a clock that shows time.

An analog clock is one with hands that are moved by an internal mechanism, making them sweep smoothly in harmony as time goes by. There's a continuity to it; you can see the tiny moments between seconds.

The digital clock jumps from second to second, minute to minute, with the numbers incrementing: there is no smooth, slowly moving hand travelling between numbers; the digits just jump.

Sound is pretty much the same in a way. Once it's digitized, the computer represents the information using sample and bit rates for precision. The flow isn't the same, but you need a really precise system and ear to spot the difference; some people can, but it's very rare. This is why, in theory, there's a difference between digital files and vinyl records.

One eye-opener for me came when I was shopping for modular gear at the local store and talking with the store's specialist, who was passionate about sound. "The one thing I don't like about samples is that the sound is frozen and dead", he said. With modular synths, because there's often an analog component, the sound, on a microscopic level, is never the same twice.

This is why using samples and digital tools in your DAW requires a bit of magic to bring everything to life.

Modulation

By modulation, we’re referring to tools that move parameters for you, based on how you have configured them. The two main modulators you can use are:

  • LFOs: Low Frequency Oscillators. These emit a frequency in a given shape (ex. sine, triangle, square, etc.) at a certain speed. They can be synced to your song's tempo or not. LFOs are often included in synths, but you can also find instances in Max for Live patches.
  • Envelopes: Envelopes react to an incoming signal and can then be shaped however you want. Compressors, as we discussed recently, kind of work on an envelope principle.

There are multiple aspects of a sound you can modulate. While there are numerous tools out there to help you with that, it’s good to know that there are a few things you can do within your DAW. The main things you can modulate are:

  • Amplitude (gain, volume): Leaving the level of a sound in the same position for a whole track is very static. While there's nothing wrong with that, it means the sound lacks dynamics.
  • Stereo position (panning): Sounds can move from left to right if you automate the panning or use an autopanner.
  • Distance (far, close): This is a great thing to automate. You can make sounds feel further away by high-passing and filtering. Combined with volume, it really pushes the sound away.
  • Depth (reverb): Adding reverb is a great way to add space, and if you modulate it, things feel very alive.
  • Sound’s length (ADSR, gating): If you listen to drummers, they’ll hit their percussion so that the length constantly changes. This can be done by modulating a sampler’s ADSR envelope.
  • Filtering: A filter's frequency and resonance changing position as the song evolves offers a very ear-pleasing effect.
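As a concrete example of the panning and amplitude targets above, here's a hedged Python sketch (my own illustration, not any particular plugin) of an equal-power autopanner driven by a sine LFO:

```python
import math

def autopan(t, rate_hz):
    """Equal-power pan position driven by a sine LFO.
    Returns (left_gain, right_gain); total power stays constant
    as the sound sweeps between the speakers."""
    pos = 0.5 + 0.5 * math.sin(2.0 * math.pi * rate_hz * t)  # 0 = hard left, 1 = hard right
    angle = pos * math.pi / 2.0
    return math.cos(angle), math.sin(angle)

# At t = 0 the LFO is at its midpoint, so the sound sits centered
l, r = autopan(0.0, rate_hz=0.5)
```

The equal-power (cos/sin) law is what keeps the perceived loudness steady while the stereo position moves — a plain linear crossfade would dip in the middle.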

Some effects that are modulation tools you already know are chorus, flanger, autopan, phaser, and reverb. They all play with panning and depth. Adding more than 2-3 instances in a song can cause issues, which is why it's good to approach each channel individually.

My suggestion: Have one LFO and one envelope on every channel and map them to something: EQ, filter, panning, gain, etc.

Some amazing modulators that offer really good all-in-one options you might enjoy (as I do, for a quick fix on a boring stem):

QuatroMod

LFO Tool by XFER Records

ShaperBox by Cableguys – My go-to for really bringing sound to life.

Movement by Output  – This one is stellar and really can make things feel messy if pushed too far but the potential is bonkers. You instantly turn anything into a living texture that is never boring.

Automation

Automation is what you draw in your DAW to create a quick-moving or long-evolving effect. You might already know about it, but you'd be surprised how often it is underused. How do you know if you're using it enough?

I have my own set of rules and here are some:

  • Each channel must have at least one long, evolving movement. I'm allergic to straight lines and will sometimes slightly shift points so they have the smallest slant. My go-tos: amplitude, EQ, or filters.
  • In the drop-down list of each channel's parameters, I want at least 3 things moving.
  • Each channel must have at least 3 quick, unique, fast changes.
  • Include at least 3-5 recorded live tweaks. I like to take a MIDI controller, map certain parameters, and then play with the knobs and faders. I record the movements and then edit them wherever I want in the song. This human touch really makes something special.

While working with automation, one thing I love is to use Max for Live patches that create variations, record them as automation, and then edit the result. It’s like having an assistant. There are great options to choose from.

Chaos

By “chaos” I mean using random generators. They would fit under the umbrella of modulators, but I like to put them in their own world. There are multiple uses for generators. You can switch any LFO to a random signal to make sure there’s always a variable changing; this is particularly useful with amplitude and filtering. It really adds life. You can also use the random module in the MIDI tools, or a humanizer on a MIDI channel. Both will make sure the notes change a little, all the time.
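A humanizer is simple enough to sketch in a few lines of Python. This is not any particular MIDI tool, just an illustration of the idea: small random offsets on velocity and timing, clamped to valid ranges.

```python
import random

def humanize(notes, vel_jitter=10, time_jitter=0.01, seed=None):
    """Return a copy of (start_time, pitch, velocity) notes with small
    random offsets, the way a MIDI humanizer keeps things changing."""
    rng = random.Random(seed)
    out = []
    for t, pitch, vel in notes:
        v = max(1, min(127, vel + rng.randint(-vel_jitter, vel_jitter)))
        t2 = max(0.0, t + rng.uniform(-time_jitter, time_jitter))
        out.append((t2, pitch, v))
    return out

# a perfectly static 16th-note hi-hat line, then a humanized copy
pattern = [(i * 0.25, 42, 100) for i in range(16)]
lively = humanize(pattern, seed=1)
```

Run it on a loop and no two bars play back identically, which is exactly the “always a variable that changes” effect described above.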

Saturation

If we think of the earlier example of how analog gear is constantly moving, using a saturator is a good way to bend perception. We discussed saturators in an earlier post, but we didn’t talk about a super useful tool named the channel strip, which often has an analog feel built in. It remains transparent, yet it does something to the signal that moves it away from a sterile, digital feel.

My favorite channel strips would be:

Virtual Mix Rack by Slate Digital. Raw power.

McDSP Analog channel

Slam Pro

 

SEE ALSO : Getting feedback on your music

Saturation Tips and Hacks

After presenting some of my favorite EQs and compressors, it would be silly not to also talk about audio saturation, which is a complementary tool. There’s not a single project I’ve done in the last 10 years where I haven’t used saturation in one way or another; the same goes for mastering. I often compare it to putting some words in bold in a text: the effect does the same thing in a mix, making parts stand out in a way the brain can’t totally understand at first.

What is saturation exactly?

Saturation is essentially a form of soft distortion that gives a certain texture to sounds. The simplest way to explain it is to think of how analog processing changes sound: it brings a certain noise to it, sometimes subtly, sometimes not. You may use it to give warmth or character to the signal being processed, or exaggerate it for a more aggressive crunch. The most common types of saturation:

  • Tape emulation: Similar to what was popular in the disco days, when mixes were sent to a reel-to-reel to get a certain thickness.
  • Tubes: Common in compressors and certain EQs built around vacuum tubes; they are the absolute reference for warming up synths.
  • Transistor and retro: To emulate an old-school feel.
  • Preamp: Often associated with guitars and the world of microphones, preamps can be used on anything. For decades they’ve been engineers’ tool of choice for giving personality to a sound.
  • Distortion: Pure distortion isn’t always pleasing or appropriate, but if you control it properly, it gives beautiful textures and beefiness.
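Under the hood, all of these flavors boil down to some nonlinear waveshaping of the signal. Here’s a rough sketch in Python with NumPy (my own simplification, not how any of the plugins above actually model their circuits) using the classic tanh soft clipper, with a drive and a wet/dry mix:

```python
import numpy as np

def saturate(x, drive=4.0, mix=0.3):
    """Soft-clip with tanh, then blend the result with the dry signal.
    High drive + low mix mirrors the 'push hard, dial back' approach."""
    # normalize so full-scale input still peaks at +/-1 after shaping
    wet = np.tanh(drive * x) / np.tanh(drive)
    return mix * wet + (1.0 - mix) * x

sig = np.sin(np.linspace(0, 2 * np.pi, 512))  # one clean sine cycle
crunchy = saturate(sig)
```

The tanh curve compresses the peaks and adds odd harmonics, which is the “certain texture” you hear; the drive sets how hard, and the mix sets how much of it reaches the output.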

There are multiple situations where you could benefit from saturation in your mixing or sound design in order to alter the character of your sounds.

Pads & synths.

There’s nothing more exciting than rich tones, melodies, and very warm pads. More often than not, I see people recording soft synths with no processing whatsoever; they’re really missing out on giving depth to the backbone of their songs. You can, for instance, simply pass them through a preamp, but my tool of choice here is absolutely tape emulation (a personal favorite of mine, in case you didn’t already know).

How: Start by pushing the saturation to a very high point and make sure it’s more than noticeable. Then adjust the wet/dry down to a very low level where the incoming signal feels almost clean but the saturation is still mixed in there. I usually find the sweet spot by going “oh, here I can totally notice the saturation” and then lowering it a few notches.

Tool: I’d suggest Tape by Softube or RC-20 Retro Color. Both are fantastic for shaping your sound with shimmering textures.

One thing I really love is using multiband saturation to get the most out of melodies. This way, you can address the lower mids one way while bringing out harmonics in the higher part of the sound. This can be done with tools such as Ozone 8, Neutron 2, and Melda’s PolySaturator.
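The principle behind multiband saturation is easy to demonstrate: split the signal into bands, saturate only the band you want, and sum everything back. This Python/NumPy sketch is a deliberately crude one-pole split (real plugins use proper crossovers), just to show the routing:

```python
import numpy as np

def split_bands(x, alpha=0.05):
    """Crude one-pole split: returns (low band, complementary high band)."""
    low = np.zeros_like(x)
    y = 0.0
    for i, s in enumerate(x):
        y += alpha * (s - y)
        low[i] = y
    return low, x - low  # high band is the exact remainder

def multiband_saturate(x, drive=3.0):
    """Leave the lows clean, push tanh saturation on the highs only."""
    low, high = split_bands(x)
    return low + np.tanh(drive * high) / np.tanh(drive)

sig = 0.5 * np.sin(np.linspace(0, 40 * np.pi, 2048))
out = multiband_saturate(sig)
```

Because the high band is computed as `x - low`, the two bands sum back to the original when no shaping is applied, which keeps the lows untouched while the upper part gains harmonics.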

Bass

Who doesn’t like a dirty, funky bassline? Low end with grit will always bring some excitement to a mix – especially in a club – it’s something we’ve heard so many times in hip hop, for instance. A very clean sine bass, typically from an 808, has a certain warmth, but passing it through tape or tubes gives it a lot of oomph. If you want, try two instances of saturation in series and see where that goes. It depends on how much you want it distorted. The wet/dry has to be applied to taste here. Dubstep producers brought this game to a whole new level.

How: Just experiment. Try to go overboard. Really.

Tool: SoundToys’ Little Radiator does marvels on basses, as does its cousin the Decapitator. For something more subtle but still robust, try the Steven Slate Virtual Preamp Collection.

Percussion

Saturation on percussion automatically brings the old-school feel of the breaks that were really popular in the 90’s. The hip hop take on that (again) was to export the audio to VHS or even cassette tape. The result is pretty badass. Experimenting outside of software is really fun, and I encourage you to give it a try. One thing I like to do is saturate only the tail and not the transients, so you beef up the overall signal without dulling the attack.

How: Duplicate the channel you want to saturate and put saturation on the duplicate. Using Max for Live’s envelope follower, map it to the wet/dry of the saturator/exciter. Flip the envelope so that when a transient is detected, it ducks the knob, making sure the transient isn’t affected. Melda’s PolySaturator provides that option internally.
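In signal terms, that trick looks like this. A sketch in Python with NumPy (an illustration of the routing, not the Max for Live device itself): an envelope follower tracks the transients, and its inverted output drives the wet amount so the attack stays dry while the tail gets saturated.

```python
import numpy as np

def envelope_follower(x, attack=0.9, release=0.999):
    """Peak follower: rises quickly on transients, falls back slowly."""
    env = np.zeros_like(x)
    e = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = attack if s > e else release
        e = coeff * e + (1 - coeff) * s
        env[i] = e
    return env

def duck_saturation(x, drive=5.0):
    """Flip the envelope so the wet amount drops to zero on peaks,
    keeping transients clean while the tail gets the saturation."""
    env = envelope_follower(x)
    wet_amt = 1.0 - env / (env.max() + 1e-12)  # inverted: 0 at peaks
    wet = np.tanh(drive * x) / np.tanh(drive)
    return wet_amt * wet + (1 - wet_amt) * x

# a decaying drum-like burst to run through it
t = np.linspace(0, 1, 2000)
click = np.exp(-8 * t) * np.sin(2 * np.pi * 80 * t)
out = duck_saturation(click)
```

The `wet_amt` curve is exactly the “flipped envelope on the wet/dry knob” from the how-to above.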

Tip: Add reverb and put the saturation after to get really fluffy crispiness.

Tools: Reels by AudioThing, Satin by u-he, and PolySaturator once more.

Vocals

There’s nothing more beautiful than vocals that are lush and full. Treating vocals alone is an art in which I could get lost. I don’t want to get into it too much, but I’d invite you to try a bit of everything to see which one suits you best. Some prefer tubes, but others swear by tape. This is where Ozone can be a game changer, especially since it can do multiband processing as well as M/S.

Tip: Apply anything and everything from what’s explained above but start by doubling your vocals which will already do great things.

SEE ALSO : Tips to add movement and life to your songs

 

Tips on how to pick your EQs and use them (Pt. I)

People often ask me about my opinions on the best audio plugins, and there’s no doubt that investing in quality EQs and compressors is one of the most important things you can do for both sound design and mixing. You can do pretty amazing things with just EQ and compression, but of course you need to understand your tools to make the best of them. In this post I propose some exercises and tips, covering the main tools I’ve gathered over the last few years and my thoughts on the best EQ plugins.

Types of Equalizers

There are many types of EQs and I believe some are more important than others. It took me a while to understand how to fully use them all and how to select the right one for a specific situation. This subject is so vast and complex that I could write a series of posts and still not get through it. I’ll try to avoid being too technical and will explain them in simple terms so anyone can understand.

The way I approach EQs is based on different actions:

  • Corrective. Sometimes part of a sound will feel aggressive and annoying. I do a corrective cut by spotting where the issue seems to be and then cutting. Corrective cuts are usually not too narrow (ex. Q of 3).
  • Surgical. A resonance that makes your ears hurt needs a very narrow cut (Q of 6-8+).
  • Tonal adjustments. An EQ can be used to make tonal changes, such as deciding whether you want your track beefier or lighter by boosting lows or highs.
  • Coloring. Some EQs aren’t transparent and add a musical touch to the changes they make. This adds some personality.
  • Valley cuts. The opposite of surgical, where the Q makes the curve really wide. It makes very subtle changes: somewhat tonal, a bit colored, and sometimes a bit corrective. Try it at different points on a sound and hear it change without really being able to tell what’s happening.

TIP: The human ear will hear a noticeable difference if you cut 3-4dB minimum. If you cut 6dB, it will be quite obvious.

The main types of EQ plugin categories are:

  • Graphic/Fixed frequencies. Influenced by older models and the very first EQs, the frequencies you have access to are fixed and can’t be changed. In many of these models the frequencies are spaced per octave, but certain companies have their own way of deciding which ones are used.
  • Parametric. A very popular parametric EQ is the Pro-Q 2 by FabFilter, which lets you drop a point anywhere and then shape how narrow you want the cut or boost to be.
  • Shelving/Band. A shelf affects a whole part of the spectrum. For example, on DJ mixers, the 3-4 EQ knobs are basically shelves of frequencies being altered.
  • Dynamic. This one is advanced. You can “order” a point of your EQ to react depending on certain conditions. For example, with a drum recording, you can tell the highs to come down by 3-4dB whenever the cymbals hit too loud. Very practical!

TIP: If you love the sound of analog, you might want to dig into Universal Audio’s suite of emulations of classic gear. The fidelity of the replication is absolutely mind-boggling!

Now let’s make some associations regarding which EQ does what:

  • Surgical and valley cuts are mostly done with parametric EQs. This type of EQ allows you to precisely identify the rogue frequencies and then cut or boost them exactly the way you want.
  • Corrective EQ can be done with parametric EQs, but with graphic ones too. Sometimes a correction needs precision, but sometimes it can just be a way to realign the curve of the sound, which a graphic EQ does easily.
  • Tonal adjustments are done with shelving and band EQs.
  • Coloring basically comes from fixed frequencies, but if you look at analog emulations or EQs that provide a type of saturation, you’ll also get some coloring and personality.

My favorite EQ plugins

Here are my thoughts on the best EQ plugins, precious tools to have in your arsenal. I’ve also included similar low-budget alternatives.

1. Fabfilter ProQ2 (Surgical, Valley cuts, Corrective, Tonal)

This plugin seems to have found its way into many producers’ toolkits, mostly because it can pretty much do it all. From complex curves and mastering touch-ups to shelving tones and copying the frequency response of one sound to apply it to another… the ways you can use this beast are so numerous that you’ll have to watch a bunch of tutorials to discover all the hidden things it can do.

Budget Alternative: TDR Nova GE by Tokyo Dawn

2. Electra by Kush Audio (Shelving EQ, analog replica)

Not so well known by the masses, but this EQ is an absolute wonder to have on hand. I use it in every single mix I do and the results are always amazing. There’s a bit of a learning curve, as the GUI is a bit weird, but even if you’re not sure what you’re doing, it shapes the sound in a way that makes it pop out and warms it up too.

Budget Alternative: RetroQ by PSP

3. BX_Hybrid V2 by Brainworx (Corrective, shelving)

I don’t think any plugin can match the results this one gets. Not as versatile as the ProQ2, but where it stands out is how buttery its cuts are, smoothing things out. When people study mixing with me, I always require them to buy this one as the very first EQ to own and use.

Budget Alternative: Voxengo Prime EQ

4. Passive EQ by Native Instruments (Shelving, correction, color)

This emulation of the famous Manley Massive-Passive is a bomb of an EQ. I love to place it on a bus with all my melodic content and then smoothly shape it into something that magically turns organic and warm. It requires a bit of exploration, but once you get your head around it, you’ll always want to use it. I find it quite powerful for sound design as a way to warm up the lows.

5. F6 Floating band dynamic EQ by Waves.

I’m not a big fan of Waves or their aggressive sales tactics, but this plugin is a really useful one to have. As described above, with a dynamic EQ you can tame frequencies that appear randomly. The problem with a static EQ is that you’re permanently cutting a frequency, so if what you’re trying to cut isn’t always there, you might cut something that doesn’t need adjustment. A dynamic EQ gives you more control. This one is also really easy to use if you’re familiar with the concept, and the fact that you can use it in M/S makes it really versatile. Not as easy and fancy-looking as Fabfilter’s, but it does more, in other ways. Wait for the price to fall; you might get it for $29 to $49 if you’re patient enough.

In the next post, I’ll go into more detail about my favorite plugins and explain certain ways to get the most out of them.



SEE ALSO :

The best EQ plugins and various EQ’ing tips (Pt. II) 

My Music Production Methodology Pt. III: Depth and spatial shaping tips

This post about music production methods is an important one. In the Facebook group I work with, I give feedback to people, and I’d say the part many struggle with the most is nailing down a proper mixdown; for the majority, the issues are with the stereo field. I have a bunch of tricks that can help turn a 2D pattern into a 3D realm to get lost in. Let’s start by discussing a few things about what makes music 2D, and then how you can slowly shape it.

One thing that is essential for music to sound clear, loud, and powerful in a club is to have the majority of your sounds “in mono”, or in engineering terms, to have a solid mid channel. This is why many people will tell you that doing a mono test on your mix, to check that everything is still heard, is a good way to know. Why? Because if sounds are placed randomly around the field, they might phase against others and end up cancelling out once summed to mono.

While this might sound like voodoo magic if you make music as a hobby, you can drop a tool into your DAW to sum the signal to mono so you can check (hint: in Ableton Live, the Utility effect will let you do that).

Ableton’s Utility tool

This is why you want your low end (under 100Hz) in mono: to make sure there are no conflicts and that it sounds fat and strong. Again, in Ableton Live 10, you can activate the “Bass Mono” option on the Utility tool.

My point here is clear and simple: depth is a fun thing to have in your music, but if you go too crazy with it, it can become a problem. So first and foremost, when you program your patterns and music, try starting in mono. Make sure everything is heard and clear.
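The mono check itself is simple math: fold the channels together and see what survives. A small Python/NumPy sketch (illustrative, not a metering plugin) of the fold-down and a correlation measure, where +1 means fully mono-compatible and -1 means total phase cancellation:

```python
import numpy as np

def mono_fold(left, right):
    """What a club's mono system plays: the average of both channels."""
    return 0.5 * (left + right)

def correlation(left, right):
    """Phase correlation: +1 = mono-safe, 0 = unrelated, -1 = cancels."""
    denom = np.sqrt(np.sum(left**2) * np.sum(right**2)) + 1e-12
    return float(np.sum(left * right) / denom)

t = np.linspace(0, 1, 44100)
tone = np.sin(2 * np.pi * 220 * t)
in_phase = correlation(tone, tone)     # this sound survives the fold
cancelled = correlation(tone, -tone)   # this one disappears in mono
```

A sound panned with a polarity flip on one side gives exactly the `-1` case: it may sound wide on headphones, but `mono_fold` returns silence.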

Once you’ve created the arrangements and are pretty much done, but before you get to mixing, start spreading your sounds around to occupy the space in front of you. You don’t want everything in the middle; it will feel narrow and lifeless. There are multiple ways to get this done, and it goes a bit beyond simple panning, which can feel boring. (Note: many mixes I get have everything in mono!)

Tips to give your mix more space

Mid/Side is a great way to use space in a mix, but is often misunderstood.

Here are a few tips to give your mix space and life; if you google this topic, you’ll find multiple others too:

  • If the sound/sample is in mono: Try doubling it by duplicating the channel a few times, then pan and experiment. In pop, soul, and R&B, producers often do this with up to 4 duplicates, spread around and/or pitched to different tones to give the sound texture. You can use a doubler VST to do the same, but there’s something exciting about doing it manually. Keep in mind, a clap is actually 4 layers, and the same goes for the rest of your percussion. Try to create something wild.
  • Panning your sounds around can work, but it will feel bland if you don’t couple it with a quality reverb. Even at very low levels, a reverb creates space around the panned sound. This is why I group percussion into families (ex. all organic, all metals, all wood, etc.), then use one reverb per family, not per sound.
  • Use stereo effects: These are super useful for moving things around; an auto-pan, for instance, helps give life and movement. They include chorus, delay, phaser, flanger, and wideners (of course). Apply them to a sound, not a family, and stick to one of these effects per song to avoid issues.
  • Quality reverbs: As described above, a quality reverb is a game changer. Stock plugins are never as good as what a whole team has worked to make special. For instance, the plugins from Valhalla are now recognized as some of the best in the industry, and for a reason: they sound just as good as some hardware units. Tiptop Audio, who make modular synths, even licensed that reverb for their Z-DSP. If you can, always go for a convolution reverb and use only one, on an AUX/send. If you really want a 3D-sounding song, keep in mind that reverb does 80% of the job. The rest is about lowering the volume of certain sounds to give the impression they’re further away; filtering out the lows can give that impression too. Mixed with a quality reverb, you’ll have a lovely space.
  • Mid/Side: This is one of the most misunderstood aspects of mixing because it’s hard to really grasp. Keeping it simple, the term refers to how your space is shaped: what’s in front of you is the mid, and the sides sit where your speakers/monitors are. If you push the sides too much, your music will phase (you’ll hear it in the mono check). But it’s really interesting to play with the mid/side (aka M/S) of your groups to open them up a bit.
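Mid/side is less mysterious once you see the arithmetic: the mid is the sum of the channels, the side is the difference, and widening is just scaling the side before decoding back. A Python/NumPy sketch of the encode/decode (the standard M/S matrix, not any specific plugin):

```python
import numpy as np

def ms_encode(left, right):
    """Split a stereo pair into mid (sum) and side (difference)."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side, width=1.0):
    """width > 1 widens the image, width = 0 collapses to mono."""
    return mid + width * side, mid - width * side

rng = np.random.default_rng(3)
L = rng.uniform(-1, 1, 256)
R = rng.uniform(-1, 1, 256)
mid, side = ms_encode(L, R)
L2, R2 = ms_decode(mid, side)  # width = 1.0 returns the original pair
```

Pushing `width` well past 1 is exactly the misuse warned about above: it inflates the side relative to the mid, and the excess difference signal is what cancels in the mono check.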

Last tip: the low end should always be in mono, and I usually make sure part of the melody is too, while the rest can be spread around. The main hi-hat and percussion should also be strong in the mid, but support sounds of the same family can be spread around to give room.

SEE ALSO : The “sous-chef” experience

The rule of thirds in arrangements and mixing

One of my favorite aspects of music making is using proportional ratios regularly. While this perhaps seems counter-productive compared to the artistic side of producing music, I use it to eliminate a bunch of technical roadblocks that emerge in the decision-making process. Because making decisions can sometimes turn into roadblocks, it helps to have a general rule you can always refer back to.

Let me explain how this rule of thirds can give you wings.

The first time I familiarized myself with this concept was when I used the iPhone grid to take pictures. I had read that a tip for taking better pictures is to use that grid to “place” your content. To compose your photos according to the rule of thirds, you imagine the photo divided into nine equal parts by two vertical and two horizontal lines. The square in the middle should hold the subject of your picture so it’s perfectly centered; it’s also recommended to place a detail where the lines cross.

When I practiced this, I immediately saw a parallel with musical arrangements. For instance, any song has three distinct sections in its storyline (intro, main section, outro). Where two sections meet, there must be a pivot, an element of transition. When I work, I always start by dividing the song into equal thirds, then I divide again so I have nine sections total. The arrangement starts with equal parts, but this changes as I dive into the details; some of the “lines” of the grid get moved around.
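The division itself is trivial to compute. A tiny Python sketch (the 108-bar track length is just an example) that splits a song into 3 acts and each act into 3, giving the nine sections described above:

```python
def thirds_grid(total_bars):
    """Split a song into 3 acts, each act into 3 sections, so 9 total.
    Returns (start_bar, end_bar) pairs; rounding absorbs uneven lengths."""
    bounds = [round(i * total_bars / 9) for i in range(10)]
    return list(zip(bounds[:-1], bounds[1:]))

sections = thirds_grid(108)  # a 108-bar track gives nine 12-bar sections
```

Each boundary in the result is a candidate spot for a pivot or transition element, which you can then nudge off the grid as the arrangement develops.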

TIP: Use markers in Ableton and give names to each section.

What you want in arrangements, is a good balance between expected and unexpected elements.

Using the rule of thirds helps achieve this balance: with the main idea of your song centered in the middle of the timeline, you get an overview of where the listener will sort of expect something to happen. Then you can play with that: either give the listener something where they expect it, or move it slightly to create a surprise.

The rule of thirds can also help in a few other aspects of your work:

  • Tonal balance: We covered this topic recently; it means splitting your song’s frequency range into three areas (low, mid, high). You can use a shelving EQ to help with this, or you could re-route your sounds into three busses, one per band. This lets you control the tone from your DAW’s mixer. By simply splitting into 3 bands, you minimize the work of deciding which tone to take.
  • Sound design: We’ve discussed sound design before, but I’d like to pinpoint how the rule applies here. For instance, think of how a kick is made. There’s the mid punch of the kick, supported by a bit (or a lot) of sub, then a transient on top. Most of my percussion sounds are layered from three sounds: one occupies most of the space, another adds body, and the last adds transients or texture. I also find that shuffling three sounds makes it difficult to get bored of a sound. The rule of thirds – where you have sound variations – pretty much always works for me. The question to ask is: is there a balance, or is there a dominant?
  • Mixing: When I do a mixdown, I always have multiple categories for my sounds. Since I really don’t want all my sounds front and forward, I’ll have some that are intentionally low, others in the middle, and the loudest ones are the ones meant to be right in front of me. It’s very soothing for the ear to have these three levels of sound because it helps create dynamic range and an acoustic feeling of tangible spacing; putting some sounds in the back supports the ones that need to be heard. Just like in sound design, if you always keep in mind that you’re layering in thirds, this can give your mixes a lot of depth.
  • 1, 2, PUNCH! This is a technique I learned in my theater classes, consisting of creating expectations and then messing with them. Basically, you introduce a fun sound, then later introduce it again at the exact same place in the pattern; the listener will expect it to come a third time. That’s where you can surprise them by either not playing the sound or bringing in something different. Simple, but very effective.
  • AUX/Sends: This might sound a bit much, but I limit myself to no more than 3 aux/sends. An overflow of effects makes a song messy and unnecessarily busy. One of my starting templates has only three sends by default: reverb, delay, and compression (or another sound modulation effect such as chorus).
  • Stereo spectrum: I like to see the placement of my sounds in a 3 x 3 grid of zones: left, middle, and right, crossed with low, mid, and high. Some of the main sounds have to be right in the middle (ex. clap, melody), some in the low-middle (ex. bass), and then some decorative elements around them. A healthy mix is shaped a bit like a tree: the middle-low should be strong with bass/kick, the middle left-right and middle-middle strong too, then some content in the middle-high, with a little presence in the high left-right. Be very careful with the low-left and low-right zones, as they can create phasing issues. You want your low end in mono, therefore centered.

There are other examples, but these are the main ones that come to me!

Important Music Production Principles

As a label manager and a teacher who regularly gives feedback (join our Facebook group if you’re interested in participating!), I’ve realized I don’t listen to music like the average person; I listen for certain music production principles. A number of things get my attention that most people won’t really notice; I listen for the principles that make – according to my tastes – music that feels full, mature, and deep. Many labels are after music that will sell, but I’m more interested in music that innovates, which to me comes from the design work involved in the song.

Why innovation first? I prefer treading new ground to releasing something vanilla. It might not pay, but the delayed gratification is more powerful and I get to attract creative minds, who are my favorite kind of people.

I was reading about visual design and was struck by how similar it is to audio production. I’ve compiled some basic music production principles that apply to both the audio and visual spheres.

Balance

Balance can be achieved in a variety of ways: from how the stereo field is occupied, to the mid/side balance, to the balance between low end and high end. I like to hear how balance has been designed and exaggerated – the emphasis of a zone that moves towards another. I want to feel that the artist is playing with balance, or can shift it over the whole timeline of the song. Balance is, to me, the umami of audio, and I want to experience something that feels full.

TIP: In the final stage of arranging, try to check each zone (left/right, mid/side, lows, mids, highs) to see how they relate to each other.

Contrast

This one is a bit tricky: how do you apply contrast in audio? It can be in how you select your sounds, for instance. Perhaps some sounds have a very sharp attack compared to others that are soft. Maybe a contrast in volume, compression, harmonics, or dull vs. very detailed. As you bring in sounds or melodies, think of how each can be different. This is useful as it broadens your palette of sounds or lets them evolve into something else. One of my favorite contrasts is between textured sounds and smooth ones. Another type of contrast I love to hear is the distinction between bold and subtle elements.

TIP: Try to import two samples at a time that are very different. Ex. 2 claps, one bright and the other fat, then go from one to another to create contrast.

Emphasis

Which element should grab your attention first? In design, this is the focal point of your artwork; in audio, putting one sound forward makes the listener engage with it. It’s usually in the mid frequencies, right in front of you. It’s rare for your key element to be panned to the right; keeping something there through the entire song would be really confusing. A good way to create a focal point is to decide what sits in front and what sits in the back.

TIP: Use one main element in mono and EQ the mids up to push it front forward. Group all sounds to be put in the back where you slightly remove mids in mid/side mode.

Movement

This one is all over this blog, and if you haven’t checked out some of the past articles on how to get more movement in your tracks, I invite you to do so. Movement is one of the most important parts of music arrangements. Movement is life, nothing less. When music is static, it feels dead, dull, redundant, synthetic in a bad way, and terribly alienating. You need your sounds to move in the space – in the stereo field as well as up and down – and there are so many ways to achieve that.

TIP: EQ, auto-pan, compression, filters are your best friends for movement.

Pattern

Ideas and hooks always depend on a precise pattern. Next time you listen to your favorite song, try to determine its pattern. Sometimes it’s simple, sometimes it’s multiple patterns that are layered. The pattern is more than just the percussion; it’s also the order of elements that reappear throughout the song. In techno, there’s a micro pattern (e.g. within one bar) that is part of a much bigger one. Decoding it is a bit like reading Morse code. But one key point about patterns, as Miles Davis explained, is understanding the importance of silence, because that’s what creates them.

TIP: When creating a pattern, try adding random additional ideas by using Ableton’s MIDI effect, “Random.” Having a developing pattern can do wonders to the timeline of a very simple song.

Rhythm

This is the perfect follow-up to the pattern principle, as they go hand in hand but are slightly different. I like to see rhythm as everything that amplifies the flow of the pattern you created. Groove templates in Ableton are particularly tied to rhythm, as is swing. One important thing to understand is the transition from section to section, as well as what’s regular vs. irregular. You can have a very simple, almost boring pattern, but with a great rhythm you can make it very engaging for the listener. It doesn’t work the other way around, though; a poor rhythm will turn a great pattern into garbage.

TIP: Try to DJ your tracks at different stages of production. You can stretch your idea/concept to 5-6 min and see how it feels, mixed as a DJ. Of course, mix it with something you love the rhythm of and see how yours fits in.

Unity

This is the final touch to a song: “making sure all the elements feel like they’re working together.” Sometimes I hear music and feel that a few sounds don’t fit in at all. Perhaps this has happened to you and you weren’t sure exactly why. Here’s a quick list of things to consider while developing a new idea:

  • Make sure all melodies are in the same scale or in compatible keys.
  • Use the tuner to make sure the most important elements are in key.
  • Always have some sounds in a “call/answer” relationship with others.
  • Certain sounds should either be working together or complementing one another (eg. played at same time or shuffling).
  • Use a global swing/groove for main sounds.
  • Stick to just 1-2 reverbs for creating a common space.

Final principle: Make your work understandable, long lasting, and detailed

Here’s a personal motto that I apply to the analysis of my own work:

  1. “Is this song understandable?” If I ask a person to sing it, can they relate to one element?
  2. “Is this song based on a trend or will it age well?” I like to analyze songs I still love after 20 years and try to see what I still love about them. I then try to apply those concepts with my current knowledge. It can be a concept or a technique.
  3. “Did I cover all the details?” The last round of arrangements I do is to carefully pass through the song, one bar at a time, to check that I’m aware of every detail: volume, tails, attacks, position, etc. If I haven’t done that, the song isn’t done.

I hope this helps you to perceive your music differently and create your music more efficiently!

Transient Shaping

In this blog, I’ve already discussed many ways of playing with your tracks to create new textures and variations and to keep your sounds interesting. I’d like to discuss another way of colouring your music: transient shaping, something that can completely change the way a track sounds and feels, depending on how you shape your sounds.

To experiment with transients, we will play with certain features of Ableton Live which can be very powerful. Alternatively, you could invest in a plugin from the “transient shaper” category; there are many out there, but some of my favorites are MTransient by MeldaProduction and Transient Shaper by Softube. Both offer quality results at a decent price.

Firstly, if you’re not familiar with transients, they make up the very beginning of a sound or sample. If you know the attack-decay-sustain-release (ADSR) envelope of a synthesizer, the attack stage is what generally shapes the transient. Sometimes it’s fast and strong; other times it’s slow and smooth. For a kick that punches, you want the transient pronounced and snappy, and if that’s the feel you’re after, a transient shaper will really interest you. Such a plugin lets you make the transient more apparent or quieter, and generally also lets you control the sustain that follows it. Sometimes you might want your transient to snap while the rest of the kick stays quieter; a transient shaper plugin can do that with two knobs. I have multiple versions of these tools and use them daily – it’s quite captivating what you can do by exaggerating the attack of sounds that have no transient at all.
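The two-knob behaviour described above can be sketched with a pair of envelope followers: a fast one that jumps on the attack and a slow one that tracks the body. Where the fast envelope exceeds the slow one, you’re in the transient and apply the attack gain; elsewhere, the sustain gain. This is a minimal sketch of that common dual-envelope idea, not any particular plugin’s algorithm, and the coefficients are arbitrary illustration values:

```python
def follower(x, attack, release):
    """One-pole envelope follower over |x| with separate attack/release coefficients."""
    env, out = 0.0, []
    for s in x:
        a = attack if abs(s) > env else release
        env = a * env + (1 - a) * abs(s)
        out.append(env)
    return out

def shape_transients(x, attack_gain=2.0, sustain_gain=1.0):
    """Boost samples where the fast envelope exceeds the slow one (the transient)."""
    fast = follower(x, attack=0.5, release=0.99)   # reacts quickly to onsets
    slow = follower(x, attack=0.95, release=0.99)  # tracks the sustained body
    out = []
    for s, f, sl in zip(x, fast, slow):
        gain = attack_gain if f > sl else sustain_gain
        out.append(s * gain)
    return out
```

A real shaper crossfades the gain smoothly rather than switching it, but the split between an “attack” control and a “sustain” control is the same.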

In Ableton Live, you can also have fun with a feature integrated in the sample’s detail view. Let’s have a look at how you can manipulate it and how you can have fun with it…

First take a loop sample, and duplicate it in another channel.

Then, on the duplicated loop, make sure you set up your clip details like this. Now turn down the transient percentage.

You’ll notice that as you lower this value, only the transient of each sound remains and the rest disappears. You’re basically trimming each sound to keep just its beginning, which is the transient. The new channel can be turned up and layered with the original: the transient is now louder, adding punch where there wasn’t enough originally.

If you flatten or consolidate, you’ll get a new view:

See the difference and what we removed? By layering the beginning, you’re adding punch.

Tip: try it with a kick loop or a hihat loop.
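The trim-and-layer trick above can be sketched offline too: keep only the first few samples after each hit, silence the rest, and sum the result back with the original loop at a chosen level. Onset positions are assumed known here (in Live, the transient markers would supply them), and the `keep` length is an arbitrary illustration value:

```python
def transient_layer(x, onsets, keep=64):
    """Zero everything except `keep` samples after each onset."""
    out = [0.0] * len(x)
    for o in onsets:
        for i in range(o, min(o + keep, len(x))):
            out[i] = x[i]
    return out

def layer(x, onsets, amount=0.5, keep=64):
    """Mix the transient-only copy back on top of the original loop."""
    t = transient_layer(x, onsets, keep)
    return [a + amount * b for a, b in zip(x, t)]
```

Raising `amount` is the equivalent of turning up the duplicated channel in Live.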

Now your fun really has just begun!

Here are a few suggestions to try for pushing your sound design even further:

  1. Control/lower the transients of the original loop with a compressor. If you set a compressor with a fast attack, it will control the transient. Play with the release to really tame it down.
  2. Add a reverb or any other effect to the transient channel alone. This is really cool because the effect can target either the beginning or the tail of each sound. I like to put reverb only on the sustain while leaving the transient dry, which gives your percussion more precision instead of letting it get lost in a pool of reverb.
  3. EQ the transients to keep only the high end for sharp precision or just the mids for more oomph.
  4. Side-chain the transient with the original sound. Experiment with this one and you’ll achieve some fun results!
  5. Compress both channels by grouping them.

Feel free to share your thoughts about transient shaping!

Adding life to sounds: movement in electronic music

Creating movement in electronic music

One of the most misunderstood concepts in electronic music is movement. By movement, I am referring to the way that each sound constantly evolves throughout a song. I was once talking with someone who is very into modular synthesizers, and he was saying that he can’t stand recorded sounds such as samples because, according to him, those sounds are “dead”. On a modular synth, a sound can repeat for minutes and never be exactly the same, because the hardware components constantly introduce slight variations. A recorded sound is frozen, just like a picture. Since we don’t all have the luxury of owning a modular synth, let me explain how we can use software tools to make sounds feel “alive” and develop movement in our own electronic music.

First, let us agree that movement in electronic music is about having some elements that are in “motion”. There are a variety of different ways to create that feeling:

1. Changes in volume (amplitude)

Volume changes in percussion are often associated with groove and swing; both alter the volume of sounds. That said, you can apply a groove template not only to percussion but also to melodies and basslines. If that’s not enough, you can use the Velocity MIDI effect, which alters the velocity of each note and, in Ableton Live, includes a randomizer that works as a humanizing factor. Another way to add dynamics is a tremolo effect, kept either synchronized to the tempo or free-running; tremolo also affects volume and is another way of creating custom-made grooves. I also personally like to make very subtle arrangement changes to the volume envelope or gain, which keeps the sound always moving.

In general, LFOs – such as those offered in Max for Live patches – can modulate almost anything, and they automatically create movement. For each LFO, I often use another LFO to modulate its speed, which gives a true feeling of non-redundancy.

Tip: Combine LFOs with manual edits, then copy the sequences through to the end of the song. I also suggest stepping outside of 4/4 and regular block structures to escape a “template feel.”
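The tremolo and nested-LFO ideas above can be sketched directly: one LFO modulates amplitude, and a second, slower LFO modulates the first one’s rate so the pattern never quite repeats. The rates and depths below are arbitrary illustration values:

```python
import math

def tremolo(x, sr=44100, rate=5.0, depth=0.5, rate_mod=0.1, rate_mod_depth=2.0):
    """Amplitude LFO whose rate is itself swept by a slower LFO."""
    out, phase = [], 0.0
    for n, s in enumerate(x):
        t = n / sr
        # The slow LFO sweeps the tremolo rate up and down over time.
        r = rate + rate_mod_depth * math.sin(2 * math.pi * rate_mod * t)
        phase += 2 * math.pi * r / sr          # integrate the varying rate
        gain = 1.0 - depth * (0.5 + 0.5 * math.sin(phase))
        out.append(s * gain)
    return out
```

With `depth=0.5` the gain moves between 0.5 and 1.0; because the rate itself drifts, no two cycles land in exactly the same place.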

2. Filter

Another great way to create movement is to keep a sound’s tone always changing. Using a filter in parallel is a very efficient way to create colours. The important part is to keep both the cutoff frequency and the resonance constantly in motion using LFOs or envelopes. Because the filter is in parallel, the sound always appears to be the same but gains some added body from the filter. What many people don’t know is that there are different types of filters, so try different filter types on different send channels and your song will feel like it’s moving. While filters are great for subtle changes, you can do the same trick with an equalizer, still in parallel. Adding an envelope follower to the filter, so that it reacts to the incoming signal and changes the frequency, is also a very nice way to keep things sounding organic.

Tip: Try comparing how a Moog-style filter differs from a regular one.
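The parallel-filter idea can be sketched with a simple one-pole low-pass whose cutoff is swept by an LFO, mixed back under the untouched dry signal. The one-pole coefficient formula is the standard smoothing approximation; all the rates and levels are illustrative:

```python
import math

def parallel_moving_filter(x, sr=44100, base_cutoff=800.0, sweep=600.0,
                           lfo_rate=0.3, wet=0.4):
    """Dry signal plus a one-pole low-pass whose cutoff follows a slow LFO."""
    out, y = [], 0.0
    for n, s in enumerate(x):
        t = n / sr
        cutoff = base_cutoff + sweep * math.sin(2 * math.pi * lfo_rate * t)
        # One-pole coefficient for the current cutoff frequency.
        a = math.exp(-2 * math.pi * cutoff / sr)
        y = a * y + (1 - a) * s
        out.append(s + wet * y)   # parallel mix: dry is always present
    return out
```

Because the dry path is untouched, the sound “always appears to be the same” while the moving filter adds body underneath, which is exactly the parallel trick described above.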

3. Textures

Background textures or noise are another great way to emulate analog gear. There are many ways to do this, but the one I recommend is to get a microphone for your iPhone and record part of, say, your next visit to a coffee shop or restaurant, or even your own house, where there is always a very low level of noise we don’t usually notice. Adding that recording at low volume automatically adds a layer of ever-evolving sound to your song. If you want, you can also convert certain noise into a groove pattern, which creates a form of randomization in your sounds. High-quality effects such as saturation, used on certain sounds, add a form of texture that keeps your samples from sounding stale.

Tip: FM modulation on a filter or oscillator can create gritty textures.

4. Stereo and Panning

For this point, there are various effects that play with the stereo image and – while you should be cautious – it’s good to have at least one or two sounds carrying them. These include phasers, chorus, flangers, delays, reverbs, and auto-pan. They can all give sounds movement if the modulation is unsynchronized and the wet/dry balance is constantly, slightly modified.

Tip: Just be careful which effects you use, as overuse can create phasing issues.
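An unsynchronized auto-pan can be sketched with an equal-power pan law driven by a free-running LFO. The 0.37 Hz rate is an arbitrary “unsynced” value (deliberately not a multiple of any musical division):

```python
import math

def autopan(x, sr=44100, rate=0.37):
    """Equal-power auto-pan: returns (left, right) channel lists."""
    left, right = [], []
    for n, s in enumerate(x):
        # Pan position sweeps -1..1 with a free-running (unsynced) LFO.
        pan = math.sin(2 * math.pi * rate * n / sr)
        angle = (pan + 1) * math.pi / 4          # map to 0..pi/2
        left.append(s * math.cos(angle))         # equal-power law
        right.append(s * math.sin(angle))
    return left, right
```

Equal-power panning keeps the perceived loudness constant as the sound drifts across the stereo field, so the movement never reads as a volume dip.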

5. Timing

A sound’s position in a pattern can shift slightly throughout a song to create a feeling of movement; this is a point people often overlook. The effect is easier to create if you convert your audio clips to MIDI. In MIDI, you can use humanizer plugins to constantly modify the timing of each note. You can also do it manually if you’re a little more into detailed editing, but in the end a humanizer does the same while also producing some unexpected ideas that might turn out well. Another trick is to use a stutter effect in parallel to throw a few curveballs into the timing of a sound every now and then.

Tip: Turn off grid snapping in the Arrangement View to be intentionally imprecise.
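What a humanizer does can be sketched by nudging each note onset by a few milliseconds of random jitter; the ±10 ms range below is an illustrative value, and real plugins usually let you weight or limit the offsets:

```python
import random

def humanize(onsets_ms, jitter_ms=10.0, seed=42):
    """Offset each note onset (in ms) by a random amount within ±jitter_ms."""
    rng = random.Random(seed)   # seeded for reproducibility
    return [max(0.0, t + rng.uniform(-jitter_ms, jitter_ms)) for t in onsets_ms]
```

Run it once per pattern repeat with a different seed and no two bars land exactly the same, which is precisely the kind of subtle drift described above.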


SEE ALSO :   Dynamic Sound Layering and Design