
Arpeggios Technical Dive

In the vast world of music, arpeggios have served as an integral element in composition, bridging the gap between harmony and melody. By understanding their roots, one can appreciate their profound effect on modern electronic music.

Origins of Arpeggios

An arpeggio, derived from the Italian word “arpeggiare,” which means “to play on a harp,” refers to the playing of individual notes of a chord consecutively rather than simultaneously. Historically, arpeggios have roots in classical music. Classical guitarists, pianists, and harpists frequently employ them to express chord progressions melodically.

Functionally, an arpeggio can convey the essence of a chord while providing movement. It serves as a bridge between harmony, where notes are sounded simultaneously, and melody, where notes are played sequentially. This bridging effect imparts a richer texture to compositions, allowing for a smoother transition between harmonic and melodic sections.

 

Arpeggios in Electronic Music

 

With the evolution of electronic music, arpeggios found a new platform for exploration. When synthesizers were first commercialized, they more often than not included an internal arpeggiator; even smaller options like Casio keyboards had a simple one. Synthesizers, with their ability to shape and modulate sound, provided the perfect tool to push the boundaries of traditional arpeggios.

 

  1. Synthesizers and Arpeggiation: Many synthesizers, both hardware and software-based, come with built-in arpeggiators. These tools automatically create arpeggios based on the notes played and parameters set by the user. Parameters like direction (up, down, up-down), range (number of octaves covered), and pattern (the rhythmic sequence of the arpeggio) can be adjusted to achieve specific tonal effects (a minimal sketch of these parameters follows this list).
  2. Arpeggio Plug-ins: Beyond built-in synthesizer capabilities, there are standalone software plug-ins dedicated to advanced arpeggiation. These tools offer extended control over how the arpeggio behaves and can be integrated into digital audio workstations (DAWs). They often come with pattern libraries, giving producers a starting point which can be tweaked further.
  3. Sequencing Arpeggios: Sequencers, commonly found in drum machines and DAWs, allow for the programming of notes in a specific sequence. This technique offers a manual approach to arpeggiation, allowing for unique and intricate patterns beyond the capabilities of traditional arpeggiators.
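To make those parameters concrete, here is a minimal sketch in Python (purely illustrative, not tied to any particular synth or DAW) of how an arpeggiator turns a held chord into a step sequence based on direction, octave range, and step count:

```python
def arpeggiate(chord, direction="up", octaves=1, steps=8):
    """Turn a held chord (MIDI note numbers) into a step sequence.

    direction: 'up', 'down' or 'updown'
    octaves:   how many octaves the pattern climbs through
    steps:     how many steps of the pattern to generate
    """
    notes = sorted(n + 12 * o for o in range(octaves) for n in chord)
    if direction == "down":
        notes = notes[::-1]
    elif direction == "updown":
        notes = notes + notes[-2:0:-1]   # back down, skipping the repeated endpoints
    return [notes[i % len(notes)] for i in range(steps)]

# C major triad, climbing two octaves, up-down, 16 steps
print(arpeggiate([60, 64, 67], direction="updown", octaves=2, steps=16))
```

Real arpeggiators add swing, gate length and rhythmic patterns on top of this, but the core is the same: a sorted pool of notes walked through step by step.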

When musicians first test a synth, one of the things they almost always try at some point is the arpeggiator. From the '70s through the '90s, electronic music more often than not featured some form of arpeggiation, whether for the bass or for the main hook.

The Impact on Electronic Music

 

Arpeggios in electronic music often lend rhythmic drive and melodic structure, especially in genres like trance, techno, and synthwave. The repetitive nature of these genres marries well with the cyclical patterns of arpeggios.

 

Additionally, with the sound-shaping capabilities of synthesizers, the tonal quality of arpeggios can be manipulated. By modulating aspects like filter cutoffs, resonance, and envelope parameters in real-time, arpeggios can evolve and transform throughout a track, adding dynamic interest.

A fascinating aspect of electronic music lies in the observation that many of its melodies are constructed from sequences which can be effectively replicated using an arpeggiator. This isn’t mere coincidence. Electronic music, with its repetitive structures and emphasis on timbral evolution, often favors linear, cyclical melodic patterns. An arpeggiator excels in this realm, offering a systematic approach to crafting these melodies.

Consider classic electronic tracks: many feature melodies that iterate over a set pattern of notes, evolving more through sound manipulation (like filter sweeps or resonance changes) than through note variation. This approach provides a consistent foundation upon which the rest of the track can evolve, allowing other elements, like rhythm and harmony, to play more dynamic roles.


Parallel and Modulated Patterns

 

1. Parallel Arpeggios:

  • Method: Start by setting two arpeggiators with the same note input but adjust one to operate in a higher octave range than the other. You’ll achieve a harmonized melodic pattern where both arpeggios play in tandem, producing a richer sound.
  • Experiment: Tweak the rhythm or gate length of one arpeggiator slightly. This introduces a phasing effect, where the two arpeggios drift in and out of sync, creating rhythmic tension and release. Another fun experiment is to build a macro around an arpeggio so that you end up with a parallel tool of your own. Make sure your receiving instrument is polyphonic, because there will be many notes. I'd recommend running the arpeggios at different speeds with a pitch/octave modifier so they play notes from different octaves (see the sketch after this list).
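Here is a rough sketch of that parallel idea, reusing the arpeggiate() helper from the sketch above (hypothetical code, not a real DAW API): the second arpeggiator gets the same chord an octave up and a slightly different step rate, so the two lines phase against each other.

```python
chord = [57, 60, 64]                                  # A minor triad
arp_a = arpeggiate(chord, direction="up", octaves=1, steps=16)
arp_b = arpeggiate([n + 12 for n in chord], direction="up", octaves=1, steps=16)

# "Detune" the rhythm: arp B advances 3 steps for every 4 of arp A,
# so the two patterns drift in and out of sync instead of locking together.
arp_b = [arp_b[(i * 3 // 4) % len(arp_b)] for i in range(16)]

pairs = list(zip(arp_a, arp_b))   # two notes per step -> the receiving instrument must be polyphonic
print(pairs)
```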

 

2. Side-by-Side Arpeggios Modulating Each Other:

  • Method: Use one arpeggiator’s output to modulate parameters of a second arpeggiator or its associated synthesizer. For example, you can set the velocity output of Arpeggiator A to control the filter cutoff or resonance of Arpeggiator B’s synth.
  • Experiment: Introduce a slow LFO (Low-Frequency Oscillator) to modulate a parameter on Arpeggiator A (like its rate/speed). This will cause the modulations impacting Arpeggiator B to change over time, introducing evolving dynamics to the piece. I like the first arp to be slow and random, and the second one faster and playing higher notes (sketched below).
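The routing described above could be sketched like this (again hypothetical Python, reusing arpeggiate() from the first sketch; in practice you would do this with MIDI/CV routing or a modulation rack): arp A is slow and random, an LFO shapes its note velocities, and each velocity is mapped to a filter-cutoff value for arp B's synth.

```python
import numpy as np

rng = np.random.default_rng(0)

slow_arp = arpeggiate([45, 48, 52], direction="up", octaves=1, steps=8)        # slow, low arp A
fast_arp = arpeggiate([69, 72, 76], direction="updown", octaves=2, steps=32)   # fast, high arp B

# Random velocities for arp A, shaped by one slow LFO cycle across the 8 steps
lfo = 0.5 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, len(slow_arp), endpoint=False))
velocities = np.clip(rng.uniform(0.3, 1.0, len(slow_arp)) * lfo, 0.0, 1.0)

# Map each arp A velocity to a cutoff (200 Hz - 8 kHz) for arp B's synth filter
cutoffs_hz = 200 + velocities * (8000 - 200)
print(list(zip(slow_arp, np.round(cutoffs_hz))))
```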

Power user super combo

TIP: Arpeggiators become super powerful if you use an Expression Control tool so that you can modulate the gate, steps, rate and distance. This will spit out hook ideas within a few minutes of jamming.

Plugins

There are multiple plugins that can be good alternatives to your DAW’s regular arpeggio. It’s always good to have 3rd party plugins so you can step out of the DAW’s generic sound.

Stepic

This one is clearly inspired by the various modular options out there, regrouped under a single plugin. It does a bit of what many different free tools like Snake do, but with the difference that the root note you play influences the sequence, which something like Snake doesn't offer. Stepic comes up often in online ambient-making tutorials, and it is great for creating generative, psychedelic melodies.

 

Cthulhu

Everything the Xfer team does is solid and well thought out, and this one doesn't disappoint. With so many presets out there, you can also randomize and quickly tweak your own sequences.

 

Seqund

AlexKid has made multiple tools for Ableton Live, and each of them has found its way into many people's workflows, either to start an idea or to drop in a quick placeholder. This one is similar to Stepic in a way, just with a different workflow. The UI is cleaner and easier to read than Stepic's, making it a quick tool for adding decorative melodies or simple basslines. The randomizer has nice options for controlling its results.

 

Conclusion

From their origins in classical expressions to their modern applications in electronic music, arpeggios have remained a compelling tool for musicians. Through synthesizers and plugins, electronic music producers have a vast palette at their fingertips to experiment and innovate. As technology advances, it’s certain that the use and evolution of arpeggios in electronic landscapes will continue to captivate and inspire.

What Makes A Difference Before Mastering

I often explain to clients, almost daily, that a good master starts with a solid mix. Therefore, I thought I'd list what actually makes a difference for me when I get files to process. This article will also cover certain things producers do to their tracks thinking it will make my job easier when, really, they just end up making me work harder.

 

I would like to start with something that is obvious to me but seems not to be for many clients: how people listen to their music. Many times, people send me a song with issues related to how their studio is set up, or to a lack of understanding of how their studio translates to the outside world. If the way you listen to music influences how you perceive it, it follows that you may misjudge the song you get back once it is mastered. When I get feedback from a client that their song has a tone-related issue (too bassy, too bright), I will always reply with "compared to what?" Because in the end, that's what it is: a comparison. You will always have some people comparing you to other things, and the definition of perfection is extremely arbitrary. That's why it's always a good idea to provide me with a reference.

 

Every mastering engineer has their own touch, therefore the idea of working with a mastering engineer is directly related to how you like their work. The engineer will work with a definition of what they think works best for what you have, related to a genre, within a range of technical points that will make the best out of it. Therefore, the first thing that can make a huge difference is trust. When you have your song done and need the last touches, there can be some wild differences, but in the end, it’s about communicating your vision and hoping the engineer can manage to do it. I often get people sending me files and asking me if they’re ready for mastering to which I always tell them that the best way to know is to do tests, where you send me a version to master, and I see how your mix translates after the process. This is, to me, a huge step to build trust.

 

Obvious Technical Details

 

These have been discussed inside and out. They're also covered on pretty much any mastering site or forum. However, I still get files that are prepared wrong. Some people forget, and some people think they know better, so let's go over them once more:

 

  • No compression or limiting on the master bus: If your gain staging is done properly, there is no real need for compression on the master, as your loudness and density will already be solid. If you want to glue things together, leave it to the engineer. If you want a certain vision, bounce a home master as a reference. A limiter is useful during production in case you get excited and push things a bit, but for mastering you need to remove that tool, because otherwise it does some intense processing on your transients and density that will be problematic for mastering. Note that many people use limiters within the mix itself, either on the low-end buss or percussion, which is OK but can sometimes cause distortion too. Also, be careful with saturation on the master. Many people actually mess up their mixes that way – a 3-5% wet factor of saturation may feel huge once mastered, so treat it with care.

 

  • Headroom: The usual requirement engineers ask for is -6 dBFS, but nowadays I'm cool with -3 dBFS too, as long as there's no limiting on the master and the transients are healthy looking.

 

  • Resolution and sample rate: This keeps changing, but I find that the bare minimum is 24-bit, 48 kHz. Some people send files with higher resolution than that, and on my side I run most of my sessions at 96 kHz to get the best headroom so I can deal with pretty much anything. Of course, files have to be stereo, WAV or AIFF.

 

These points are easily handled by most clients. If you aren't sure about one of them, you'll easily get answers on forums or straight from a YouTube tutorial. I often get files that don't meet those requirements, but usually we can easily fix them.

 

Average Level Technical Details

 

This is where things get messy. These are what I'd call average-level details, meaning that new producers will have some difficulty with them, but if you've been making music for a few months and finished a bunch of tracks, you'll probably have run into some of these issues, and the lack of experience might lead you into trying different things. It takes time to really pinpoint how a mix will translate after mastering. It also comes down to your pick of engineer, their aesthetic, and their communication with you. Of course, the more you work with someone, the more they'll know what you expect. Most of my recurring clients never ask for a revision because it comes back the way they want.

 

Advanced Technical Details

 

  • Loudness. This is where many people are confused. There’s a difference between the peak loudness and the density of a song. I’d encourage you to get a loudness measuring tool and look into the LUFS indicator. 

 

If your track peaks at -6 dB, it means that I will need to add 6 dB of gain, as I'm trying to get it as close to 0 dB as possible. To do this, I will also need to boost the density to match other songs on the market. If your song is close to 0 dB but lacks density, it probably means that it's not loud enough.

 

There are multiple ways to boost the density, such as saturation and compression. Sometimes people wonder why their song is compressed, and the reason is always about matching loudness; we can't simply boost the gain, as that won't be enough. That said, people need to do their gain staging properly. I can't explain how in this post, but there are multiple tutorials online for that.

 

So, in the end, I ideally prefer a mix to sit roughly around -15 LUFS. I can work with less, but you'll have to accept that there will be a pretty steep difference.
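If you want to see the numbers behind this, here is a tiny sketch of the dB arithmetic (the -8 LUFS master target below is just an illustrative assumption, not a universal standard):

```python
# Peak headroom: a mix peaking at -6 dBFS needs +6 dB of gain to approach 0 dBFS,
# which is roughly a 2x linear boost.
measured_peak_dbfs = -6.0
gain_db = 0.0 - measured_peak_dbfs
linear_gain = 10 ** (gain_db / 20)           # ~2.0

# Density: the distance between the mix loudness and the master target is what
# compression, saturation and limiting have to make up.
mix_lufs = -15.0                             # the kind of mix level I like to receive
target_lufs = -8.0                           # hypothetical loud master target
density_to_add_db = target_lufs - mix_lufs   # 7 dB of "density" to build

print(linear_gain, density_to_add_db)
```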

 

  • Stereo width. Most people I know love their song to be returned with a nice width, as they love to be wowed. If it's too wide, though, there will be a loss in punch and assertiveness. Usually people are pretty OK with this, but a lot of people get hooked on widening plugins, and clients might boost the sides a bit too much. I often have to rebalance the signal between mid and side. It's not a big deal, but if I notice the sides are too loud in some frequencies, I'll have to control that because it might be a sign of phasing. That is one type of issue clients have a hard time spotting, because it requires experience or good monitoring tools (a small sketch of the mid/side rebalance follows).
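For the curious, that mid/side rebalance boils down to very simple math. This numpy sketch (assumed stereo arrays, not any specific plugin) shows how pulling the side level down narrows the image:

```python
import numpy as np

def rebalance_width(left, right, side_gain_db=-2.0):
    """Encode L/R to mid/side, trim the side level, decode back to L/R."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    side *= 10 ** (side_gain_db / 20)   # negative dB narrows, positive widens
    return mid + side, mid - side
```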

 

  • Saturation. Every now and then, I'll have clients who push saturation a bit too much without realizing that loudness matching will multiply that crunch considerably. This results in weird clipping, distortion, noise, nastiness. Some people are OK with that, but some need to redo their mixes to find the sweet spot. My general suggestion is to add saturation until you hear it, then dial it back a bit. If you can clearly hear the saturation and your gain staging isn't right, I will have to boost your track a lot, and the saturation will be boosted along with it.

 

  • Noise floor. If you record from synths and analog gear, there might be some noise in the background. You need to record as loud as you can so the noise doesn't get boosted later by gain staging. It often happens that people record with lots of noise in the background, and when I boost the track, the noise gets amplified too. So, be careful.

 

  • Effects. In general, two red flags occur here: overuse of phasing or MS processing, and reverb that is way too loud. Often clients have to fix the reverb and send me a new file. Long reverbs are super tricky, and I sometimes suggest ducking them so they don't mess up the transients or make the entire mix messy.

 

  • Compression. This one is tricky. Sometimes people put compression on the master to glue everything together, but I don't recommend doing that much. This is something I like to control myself, since I do the final gain staging and the overall adjustments in that department. People who use compression as gain on the master, or all over the place, end up with exaggerated tails on sounds that are supposed to be a bit shorter, which can have dramatic effects on the bass or the kick, for instance. If they bleed into one another, the low end will be mushy and messy. Snappy kicks need space to cut through, and if the mids are also bleeding all over the place, there will be a lack of precision. Overall, compression is useful, yes, but in moderation, unless you want a puffy, really inflated-sounding track.

 

  • Samples. The higher the quality of your samples, the better the mix you might get. In essence, MP3s or YouTube-ripped sounds will sound lo-fi, and that gets exaggerated, once more, in mastering. This is where aliasing and weird digital artifacts can turn a pristine-sounding song into a harsh one. When I refer to quality samples, I mean not only the bit rate but also a good balance of density, clarity, and precision. Samples with sibilant resonances and sharp transients can also be hard to control in the mastering process.

 

What usually really helps and will make a huge difference:

 

  • Resonance removal: Since mastering involves gain staging, any resonances left in the mix will quickly escalate into harsh ringing. Removing resonances isn't something you learn easily, but once you become aware of the impact they have on your mixes, you'll want to handle them right at the sound design stage. While I can control them with my mastering EQs, there's nothing like having a clean mix to start with. I had a try at the Reso EQ and it turned out to be quite solid. There's also MAutoEQ by Melda, which detects resonances and lets you cut to taste.

 

  • Transient taming: There's nothing more annoying than harsh transients on a big sound system or at high volume. There's a difference between snappy and harsh, and sometimes people don't realize it until they hear the master. While I can control things on my end, if you do most of the cleaning on your side, your mix will sound stellar. One of my favorite transient shapers is Impact by Surreal Machines, but if you want something that is a game changer, you can go high-end and get the Oxford Transmod.

 

  • Proper leveling: This is the mixing 101 of all tips. There's not much you can do other than practice, take breaks, and listen to references, but if you get your levels right, it's always a win in mastering.

 

  • Sidechaining, unmasking: If you have multiple sounds sitting in the same frequencies, you'll soon head into masking territory. You can spend some time carving the frequencies of one to let the other be heard, but the fastest fix is sidechaining. TrackSpacer is always clean and fast, but lately the new Neutron 4 has proven to be quite amazing; it also has a lot of other practical tools, and it has been on every song I've mixed recently.

 

  • Proper gating: Gating is often misunderstood, but it's a technique that brings punch, clarity, and dynamics to drums or anything with tons of detail. It also sometimes resolves masking, cleans up the noise floor, and avoids mushiness all around. If you don't know much about it, go check out some tutorials!

 

The last stretch of points is what I’d consider advanced but those are actually the ones that I will have the most trouble with because they should always be handled in the mix. The cleaner your mix, the better the master.

 

How To Prepare To Make Music

When I was 10 I was invited to be part of the track and field crew at my middle school. While I always considered myself a proficient runner, one thing that we started to do more was stretch. At first, it seemed like a huge waste of time, since all I wanted to do was run. Instead, we were spending all this time doing these exercises that, to me, had nothing to do with running. However, after months of stretching, I started to realize that I was getting significantly faster. This is because I was warming up. Just like you have to warm up to prepare for running, the same goes for music. In this post, we’re going to discuss warm-up techniques that help you prepare to make music. 

 

Your Tools Aren’t That Important

I’ve talked about this frequently in previous articles, but it deserves to be reiterated. In music production, clients often think that they can buy all the equipment they want, and somehow, miraculously, they will be inspired to create. However, more often than not, they get stuck and the most productive thing that happens is my client cleans the dust off their wall of useless gear. 

Just buying equipment doesn’t do anything if you’re not intimately familiar with it. Imagine buying a nice guitar and thinking you can play it right away despite not knowing how to play guitar. Sounds ridiculous, right? Of course, it does! It takes time to learn a new instrument. It takes frustration. It takes commitment. However, sometimes they do know how to use this gear, and still, nothing happens. More often than not, their problem is they don’t know how to prepare to make music. And just like I was warming up for track and field, so must a producer. 

 

Come Up With Your Own System When Preparing To Make Music

Now people think there is a uniform way to prepare, however, everyone is different. The mind is not a quadricep, where there are standardized stretches that make it more functional. So what we do in coaching is to come up with a system that works for them. I start with figuring out what their current habits are because one thing we do know is that what they have been doing isn’t working. 

So once we figure out what they have been doing, it's time to figure out a system that works for them. Like I said earlier, everyone is different, so everything that follows is a suggestion, not a catch-all.

 

Actively Listen To Music To Prepare To Make Music

The first thing producers can do is listen to music before they make it. This might be a huge "duh" statement, but how many people actively listen to music? How many people come home, crack a beer, put on a record, and then just sit there, doing nothing else except engaging with the music? 10%, maybe? However, it's this 10% of people who have set themselves up for success if they are music writers themselves.

When listening to music actively, it’s best to think of it as a reference track, in a way. Listen to the song over and over again. Note the timbre and structure of the song. Like actually note it in a notebook. This will get your mind prepared to make music by actively engaging it.

When actively listening to music, make sure to concentrate on the appropriate parts of a song. Lots of producers obsess over the kicks, hi-hats, and the bass, but at the end of the day, it's the melody that people remember. So do yourself a favor and concentrate on things that you can easily absorb. You will probably not remember the exact timbre of a hi-hat, but you might remember the melody enough to replicate something similar later.

 

Listening To A DJ Set Will Help You Prepare To Make Music

Many students tell me that they find inspiration while they are in the club, and can’t get home quick enough in order to harness it. A solution? Listen to a DJ set for 20 minutes to an hour. The longer you prepare the better. 


You can take notes on the transitions and compositional intricacies, something you couldn't do while in a club. While it's not exactly the same as being there, I often find that my students say all the ideas they had in the club start manifesting themselves again.

One thing I like to do is put on a mix while scrolling through and listening back to the samples on my hard drive. By doing so, you can hear when a sample fits nicely into the mix, which you can then categorize and use later. Just make sure the volume levels match what you're doing in Ableton. You want your samples to vaguely fit inside the mix rather than being the predominant sound. This is a helpful way of managing samples as well, because otherwise, when you're just scrolling through samples and not comparing them to music, you're comparing them to air.


DJing To Help Prepare To Make Music

I think DJing is a great way to prepare to make music. Similar to the other suggestions, DJing is a powerful form of active listening. DJing trains your ears to deeply understand the structure and mix of a song. You can easily add or subtract frequencies to see how they modify the song. You can also hear where transitions happen, allowing you to build your tracks out to be more DJ-friendly (if that is one of your goals). 

 

Build Categorized Playlists To Help Prepare To Make Music

I know I said earlier that it's easier to concentrate on the melody of a song than on its rhythm. So what are you supposed to do when you want to work on a specific aspect of a song? Well, as you're listening, throw the songs into playlists labeled by the aspect that inspires you. Have one for melodies, one for that really specific hi-hat or kick, one for basslines. Then, when you want to prepare to make music, you can go back to those playlists and warm up by actively listening to them.

 

Take Inspiration From Your Inspiration’s Inspirations

Another way to prepare to make music is to learn from the people who inspire your inspiration. For instance, I’m inspired by Ricardo Villalobos, so I often read articles about him. Through these articles, I found out that he’s inspired by pianist Keith Jarrett. Jarrett does not make electronic music, however, he’s clearly had a large influence on the genre, whether he knows it or not. So, naturally, I listen to Jarrett to see if I can’t harness some of that inspiration.

 

There Are Many Ways To Prepare Your Brain

At the end of the day, the goal is to get your brain engaged. You can play video games while listening to music, read a book, or go for a run. You can also paint, or write. These are all just suggestions and you should find the one that gets your mind warmed up, since as I stated at the beginning of the article, a mind is not a leg – there is no uniformity.

 

When life is hard, make more music

If you’ve been following the news since the beginning of 2020—what’s happening in Australia (the fires and political situation), Iran and USA, etc.—it’s clear that our lives are all effected by things we feel like we have very little control over. For many, global events and news may increase feelings of helplessness, anxiety, or frustration.

Feeling a lack of control is not alien to musicians, who constantly deal with the feeling of not being able to control their path or destination. Notable situations are, for instance, not knowing if a label liked your demo, not knowing sales figures of a release, waiting for news from a promoter that booked you, not knowing if people are really enjoying your music, not knowing how to really have the mix you want, etc.

“Not knowing” becomes an uncertainty that musicians face daily, and it can haunt their thoughts. Some people also feel like the world is spinning out of control, so what, exactly, can we do about it?

For those of you who are musicians and going through a tough time, one piece of advice I can give you is to make more music. To people who complain that they don't have time, I say: find and make time for it as if your life depends on it. I know this sounds like an exaggeration, but I'd like to explain why, in my case, it really, really helped, and I wouldn't be exaggerating in saying it almost saved my life. As a musician or creative person, making time for making music is incredibly important.

Grieving, mourning

In a span of 3 years, I lost both my parents. My father passed away first, in 2016—a huge shock as he was very healthy. I was left completely destabilized and felt a deep void which I couldn't see the end of. The only thing that really helped was listening to ambient music when I was home. I would play music by William Basinski, which is lofi and loopy as hell, but very comforting in a way. In 1998, just before I decided to make music as Pheek, I had a rough separation from my girlfriend at the time and I was basically incapacitated, at home, not doing anything but listening to the same CD over and over. Music was the only thing that made sense at that moment, and it made my path through life seem less negative. Listening to familiar music was a need for me, and my brain demanded that I listen to a specific sound. Nowadays, with the power and reach of Spotify (or even YouTube), you can get suggestions based on what you listen to, and while being soothed, you also discover similar music. There's an endless amount of music, and as a musician, you have the power to add to it, and to be inspired by it.

That break-up and those intense listening days led me to want to make my own healing music. Plastikman's music led to the creation of my Pheek moniker. The loss of my father led me to make ambient music for 8 months, mostly creating soothing loops that I would listen to while commuting or at home. What's the use of making music if you don't do it for yourself first?

I find that this is something people I work with sometimes seem to miss. It becomes more of a dispensable thing—the focus becomes where your song will end up instead of making music for oneself. I don’t mean to be judgemental, but this is something I often see.

Now, when it comes to immersing yourself in music creation and dedicating time to it, it gives your brain something to focus on. To combat my own fears about climate uncertainty, I decided to register for a website called Weeklybeats, where artists are asked to make one song per week for the entire year. I feel that I need to completely push myself to make more music for myself. I've been at the service of others for the last year, but recently I felt like my music was too low a priority in my life and that my skills as a producer had suffered.

When the brain is on a mission, it will focus on resolving problems, coming up with new ideas, and finding inspiration everywhere. If you can swap the hopelessness for a creative flow, even if it doesn't bring any solution to the world's problems, at least you're not being a problem yourself: you are making music, and music brings people together.

Making time for making music

“I don't have time” is the number one excuse I hear when I talk about making more music. I make it myself regularly, and I also suffer from the “I don't know how I'll do that” excuse. You get a better sense of free time when you become a parent. When you have a child, all your time and energy is focused on the family and you forget about yourself and your own needs. A 5-minute moment of free time can feel like gold. I felt a shift in my music production when I had my son in 2010. I couldn't just wake up and make music anymore; there were other responsibilities to manage, and everything felt out of control. I managed to use every 10-minute moment I could find to get some work done on music projects.

How did I do it while raising a child? I’m not totally sure, but I can recommend some ways to dedicate more time to making music in your own life that helped me:

  1. Move a “lighter” setup of your studio closer to your routine. This one might be difficult to figure out, but 100% of the people I've talked into doing this came back to me with positive feedback. Most of the time, people keep their studio in a far-off portion of their life; that means a studio either outside their apartment or in a room at the back of it. It's physically disconnected from you, and it won't have a place in your life apart from being an image in your mind. I often encourage people to bring a simpler setup into the living room, kitchen, or wherever they hang out the most. I also suggest leaving your computer or gear on so that you can, without any delay, just pass by and play with music. You can leave a loop playing while cooking or cleaning. Having music as a physically proximate part of your life is a huge eye-opener for new methods of production.
  2. Go mobile. This might sound a bit weird, but making a bit of music on the go is quite fun. Don't forget that a lot of people use Airpods to listen to music or will listen to it while commuting. I'm not saying that you'll make a masterpiece this way, but if you can start a few ideas on your way to school or work, you have something that keeps you busy and creative. I would also recommend recording some moments of your life. We see a lot of pictures on social networks, but not enough audio; recording moments and listening to them later is a surreal experience, plus you can use parts of those recordings in songs, too. There's nothing more surprising than adding a bit of random conversation to a song.
  3. Don’t wait on perfect conditions to work. The number one procrastination excuse that comes up for a lot of people is that they need certain “acceptable” conditions to make music. It can be with regards to the setup they have, missing gear, missing software, or time of day. Some people believe they can only make music at a specific moment of the day. If you are giving power to these conditions, you are not in control of your creativity and believe that external forces influence you. Sorry, but not sorry, this is false. You, and only you, can make it happen, and it starts by sitting down and just doing it. If it feels overwhelming, then commit to 5 minutes of music and see where that leads you.
  4. Commit. This is why I decided to take on the challenge of doing 1 track a week for 2020. Instead of making an album this year, I’ll make tons of music, on a regular basis. You can commit in many other ways. It can be by partnering with friends to swap music, or making music for local DJs or for your Bandcamp.
  5. Let yourself and your process be free-form. The biggest enemy of creativity is a mold or formula, and if you always follow the same patterns, you will forget that music can even be a simple few notes repeated. Try to listen to 60s-70s neo-classical, minimalist music to redefine how you perceive what you do. Let yourself explore random ideas. A song can be a simple idea and you don’t always need to make a template or a track. It can be something imperfect, recorded out of the blue. There are no rules, be free!

SEE ALSO : Music Making Is Problem Solving

More tips about working with samples in Ableton

Recently I was doing some mixing and I came across multiple projects in a row that had some major issues with regard to working with samples in Ableton. One of them is a personal pet peeve: taking a loop from a sample bank and using it as-is. There's no real rule against doing this; if you bought the samples, you're entitled to use them in any way you want.

While I do use samples in my work sometimes, I do it with the perspective that they are a starting point, or a way to quickly pinpoint the mood of the track I'm aiming for. There's nothing more vibe-killing than starting to work on a new song but losing 30 minutes trying to find a fitting sound, like hi-hats for instance. One of my personal rules is to spend less than 30 minutes tweaking my first round of song production. This means that the initial phase is really about focusing on the main idea of the song. The rest is secondary and could be anything: if you mute all parts except the main idea(s), the song will still be what it is.

So why is it important to shape the samples?

Well basically, the real answer is about tying it all together to give personality to the project you’re working on. You want it to work as a whole, which means you might want to start by tuning the sample to the idea.

Before I go on, let me give you a couple of suggestions regarding how to edit the samples in ways to make them unique.

I always find that pitch and length are the quickest ways to alter something and easily trick the brain into thinking the sounds are completely new. Even pitching down by 1 or 2 steps or shortening a sample to half its original size will already give you something different. Another trick is to change where the sample starts. For instance, with kicks, I sometimes like to start playing the sample later in the sound to have access to a different attack or custom make my own using the sampler.
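As a rough illustration of why pitch and length are such quick wins, here is a small numpy sketch of classic sampler-style repitching (the repitch and late_start helpers below are hypothetical, for illustration only): resampling shifts pitch and duration together, and skipping the first few milliseconds exposes a different attack.

```python
import numpy as np

def repitch(sample, semitones):
    """Sampler-style repitch: resampling changes pitch AND length together."""
    ratio = 2 ** (semitones / 12)                      # -2 semitones -> ~0.89x playback rate
    positions = np.arange(0, len(sample) - 1, ratio)   # slower read = longer, lower sound
    return np.interp(positions, np.arange(len(sample)), sample)

def late_start(sample, sr, offset_ms=10.0):
    """Skip past the original transient to get a different attack."""
    return sample[int(sr * offset_ms / 1000):]

sr = 44100
kick = np.random.randn(sr // 4)            # placeholder audio; load your own sample here
darker_kick = repitch(kick, -2)            # a couple of semitones down, slightly longer
clicky_kick = late_start(kick, sr, 8.0)    # start 8 ms into the sound
```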

TIP: I love to have sounds change length as the song progresses, either by using an LFO or by manually tweaking them. For example, a snare that gets longer creates tension in a breakdown.

In a past post, I covered the use of samples more in-depth, and I thought I could go into a bit more detail about how you can spice things up with samples, this time using effects or Ableton's internal tools.

Reverb: Reverb is a classic; simply dropping it on a sound will alter it, but the downside is that it muffles the transients, which can make things muddy. Solution: Use a Send/AUX channel where you place a transient designer to (drastically) remove the attack of the incoming signal and then add a reverb. In doing this, you'll only be adding reverb to the decay of the sound while the transient stays untouched.

Freeze-verb: One option you'll find in Ableton's reverb is the freeze function. Passing a sound through it and freezing it is like taking a snapshot of the sound and putting it on hold. Resample that. I like to pitch it up or down and layer it with the original sound, which lets you add richness and harmonics to the original.

Gate: So few people use Ableton's Gate! It's one of my favorites. The best way to use it is by sidechaining it with another signal. Think of this as the opposite of a compressor in sidechain mode: the gate lets the gated sound play only when the other is also playing, and you also have an envelope that lets you shape the sound. This is practical for many uses, such as layering percussive loops, where the one that is sidechained plays only when it detects sound, which makes a mix way clearer. In sound design, this is pretty fun for creating multiple layers on a dull sound, using various different incoming signals.
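Here is a rough numpy sketch of the idea behind a sidechained gate (a naive envelope follower, not Ableton's actual algorithm): the gated signal is only let through while the trigger signal is above a threshold.

```python
import numpy as np

def sidechain_gate(signal, trigger, sr, threshold=0.1, attack_ms=2.0, release_ms=60.0):
    """Open the gate on `signal` only while `trigger` is above the threshold,
    smoothed with simple one-pole attack/release coefficients."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(signal)
    gain = 0.0
    for i, level in enumerate(np.abs(trigger)):
        target = 1.0 if level > threshold else 0.0
        coeff = atk if target > gain else rel   # opening uses attack, closing uses release
        gain = coeff * gain + (1.0 - coeff) * target
        env[i] = gain
    return signal * env
```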

Granular Synthesis: This is by far my favorite tool to rearrange and morph sounds. It stretches sounds, which gives them a grainy texture and something slightly scattered-sounding too. Melda Production has a great granular synth that is multi-band, which provides lots of room to treat the layers of a sound in many ways. If you find it fun, Melda also has two other plugins that are great for mangling sound: mTransformer and mMorph.
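As a toy example of what granular stretching does under the hood, here is a naive overlap-add sketch in numpy (nothing like a full granular engine, just the grain/window/overlap idea):

```python
import numpy as np

def granular_stretch(audio, sr, stretch=2.0, grain_ms=80.0, overlap=2.0):
    """Naive granular time-stretch: windowed grains are read from the source
    and laid onto a longer output timeline by overlap-add."""
    grain = int(sr * grain_ms / 1000.0)
    hop_out = max(1, int(grain / overlap))     # grain spacing in the output
    hop_in = max(1, int(hop_out / stretch))    # read head advances more slowly
    window = np.hanning(grain)
    out = np.zeros(int(len(audio) * stretch) + grain)
    read = write = 0
    while read + grain <= len(audio) and write + grain <= len(out):
        out[write:write + grain] += audio[read:read + grain] * window
        read += hop_in
        write += hop_out
    return out / (np.max(np.abs(out)) + 1e-9)  # rough normalization
```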

Grain Delay, looped: A classic and sometimes overused effect, this one is great because you can automate pitch over delay, and it's still a great tool to use along with the Looper; they do really nice things when combined. I like to make really short loops of sounds going through the Grain Delay. This is also fun if you take the sound and double its length, as it will be stretched, granular style, creating interesting textures along the way.

Resampling: This is the basis of all sound design in Ableton, and resampling yourself tweaking a sound is by far the most organic way to treat it. If you have PUSH, it's even more fun: you can create a macro, assign certain parameters to the knobs, and then record yourself just playing with them. You can then chop the recording down to the parts you prefer.

I hope this was useful!

SEE ALSO : Learning how to make melodies

Creating organic sounding music with mixing

I'm always a bit reluctant to discuss mixing on this blog. The biggest mistake people make in mixing is to apply all the advice they can find online to their own work. This approach might not work, mostly because there are so many factors that can change how you approach your mix that it can be counter-productive. The best way to write about mixing would be to explain something and then include the many cascades of "but if…", depending on how you'd like to sound. So, to wrap things up properly, I'll cover one topic I love in music, which is how to get very organic-sounding music.

There are many ways to approach electronic music. There's the very mechanical way of layering loops, which is popular in techno, or using modular synths/eurorack. These styles, like many others, have a couple of main goals in mind: making people dance or showcasing craftsmanship in presenting sounds. One of the first things you want to do is to know exactly what style you want to create before you start mixing.

Wherever you’re at and whatever the genre you’re working in, you can always infuse your mix with a more organic feel. Everyone has their own way, but sometimes it’s about finding your style.

In my case, I’ve always been interested in two things, which are reasons why people work with me for mixing:

  1. While I use electronic sounds, I want to keep them feeling as if they’re as organic and real as possible. You’ll have the impression of being immersed in a space of living unreal things and the clash between the synthetic and the real, which is for me, one of the most interesting things to listen to.
  2. I like to design spaces that could exist. The idea of putting sounds in place brings the listener into a bubble-like experience, which is the exact opposite of commercial music where a wall of sound is the desired aesthetic.

There’s nothing wrong with commercial music, it just has a different goal than I do in mixing.

What are some descriptions we can apply to an organic, warm, rounded sound?

  • A “real” sounding feel.
  • Distance between sounds to create the impression of space.
  • Clear low end, very rounded.
  • Controlled transients that aren’t aggressive.
  • Resonances that aren’t piercing.
  • Wideness without losing your center.
  • Usually a “darker” mix with some presence of air in the highs.
  • Keeping a more flat tone but with thick mids.

Now with this list in mind, there are approaches of how to deal with your mix and production.

Select quality samples to start with. It's very common for me to come back to a client and say "I have to change your kick, clap and snare", mostly because the source material has issues. This is because many people download crap sounds via torrents or free sites, which usually haven't been handled properly. See sounds and samples as the ingredients you cook with: you want to compose with the best-sounding material. I'm not a fan of mastered samples, as I've noticed they sometimes distort if we compress them, so I usually want something with headroom. TIP: Get sounds at 24-bit minimum, and invest a few bucks to get something that is thick and clear sounding.

Remove resonances as you go. Don't wait for a mixdown to fix everything. I usually make my loops and will correct a resonance right away if I hear one. I'll freeze and flatten right away, and sometimes even save the sample for future use. To fix a resonance, use a high-quality EQ with a Q of about 5 maximum and set your EQ so you can hear what you are cutting. Then lower it by about 4-5 dB to start with. TIP: Use Fabfilter Pro-Q3.

Control transients with a transient designer instead of an EQ. I find that many people aren't sensitive to how annoying percussion can be in a mix if the transients are too aggressive; that can sometimes only be noticed once you compress. I like to use a transient designer to lower the impact, just a little, on the ones that are annoying. TIP: Try the TS-1 Transient Shaper.

Remove all frequencies under the fundamental of the bass. This means removing rogue resonances and monitoring what you're cutting. If your bass or kick hits at 31 Hz, then remove anything under that frequency. EQ the kick and all other low-end sounds independently.

Support the low end with a sub sine to add roundness. An anemic or confused low end can be swapped out or supported by a sine-wave synth that is there to enhance the fundamental frequency and make it rounder. It makes a big difference to the warmth of the sound. Ableton's Operator will do, or basically any synth with oscillators you can design.
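A sub layer really is just a clean sine at the fundamental. As a sketch (numpy, with an assumed fundamental of 55 Hz; use whatever note your bass actually sits on):

```python
import numpy as np

sr = 44100
fundamental_hz = 55.0                 # assumed bass fundamental (A1)
duration_s = 1.0

t = np.arange(int(sr * duration_s)) / sr
sub = 0.3 * np.sin(2 * np.pi * fundamental_hz * t)

# Short fades so the layer starts and stops without clicking
fade = np.linspace(0.0, 1.0, 256)
sub[:256] *= fade
sub[-256:] *= fade[::-1]
```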

High-pass your busses with a filter at 12 dB/octave. Make sure you use a good EQ that lets you pick the slope, and don't high-pass too aggressively, to keep a more analog feel to your mix.

Thicken the mids with a multiband compressor. I like to compress the mids between 200 and 800 Hz. Clients often get it wrong around there, and this range is where the real body of your song lies. The presence it provides on a sound system is dramatic if you control it properly.

Use clear reverb with short decay. Quality reverbs are always a game changer. I like to use different busses at 10% wet and with a very fast decay. Can’t hear it? You’re doing it right. TIP: Use TSAR-1 reverb for the win.

Add air with a high-quality EQ. Please note this is a difficult thing to do properly and is best achieved with a high-end EQ for better results. Just add a gentle boost on your melodic buss around 15 kHz. It adds a very subtle sheen to the mix and is ear-pleasing in small quantities. TIP: Turbo EQ by Melda is a hot air balloon.

Double Compress all your melodic sounds. This can be done with 2 compressors in parallel. The first one will be set to 50% wet and the second at 75%. The settings have to be played with but this will thicken and warm up everything.

Now for space, I make 3 groups: sounds that are subtle (background), sounds that are in the middle part of the space, and sounds that are upfront. A mistake many people make is to have too many sounds upfront and no subtle background sounds. A good guideline is 20% upfront as the stars of your song, 65% in the middle, and the remaining 15% as the subtle background details. If your balance is right, your song will automatically breathe and feel right.

All the upfront sounds are the ones where the volume is at 100% (not at 0 dB!), the ones in the middle are generally at 75%, and the others vary between 50% and 30% volume. When you mix, always play with the volume of a sound to see where it sits best in the mix. Bring it too low, too loud, somewhere in the middle; you'll find a spot where it feels like it is alive.

Lastly, one important thing is to understand that sounds have relationships to one another. This is sometimes “call and response”, or some are cousins… they are interacting and talking to each other. The more you support a dialog between your sounds, the more fun it is to listen to. Plus it makes things feel more organic!

SEE ALSO : More tips about working with samples in Ableton

The best EQ plugins and various EQ’ing tips (Pt. II)

In my previous post regarding the best EQ plugins, I covered some of my favorite EQs and some of their uses. After receiving many compliments about that post, I’ve decided to continue with a part two. In the following post, I’ll share a few tricks with you that you can easily do yourself when facing certain mixing situations, and I’ll also briefly outline compression.

Filters

In case you didn't already know, EQs are filters: really complex mathematics that each developer has coded with slightly different formulas. This explains why some EQs are really expensive: because of the time invested in perfecting the curves. Many people don't realize it, but EQs do sound different from one another, and you can tell once you have a high-quality sound system.

“Most people don’t have a high quality system, so what’s the point…”, you say.

Well, if you use high-quality tools, your regular sounds will end up being "upgraded" in quality too, which will eventually make a difference wherever you play them.

The number one tip for a better mix is to use filters; this alone can make dramatic improvements.

For instance, your kicks might sound muddy if you don't remove the garbage frequencies below the fundamental note. If this sounds complicated, let me explain it in the simplest terms:

  1. Use your EQ; the first point on the left should be switched to a filter, set to low cut.
  2. Set the slope to 24 dB/octave.
  3. Start the cutoff at 20 Hz, then move it up in frequency until you hear your kick losing power. When that happens, you're filtering too high and have to roll back a bit.
  4. My general rule is to cut kicks at 20 Hz by default.

Now, that tip was for kicks alone, but you should apply this idea to basically everything in your mix. However, besides the kick, I wouldn't use a slope of 24 dB/octave on anything else unless there are big issues. It's up to you to experiment, but if you want to test something interesting, try 18 or 12 dB/octave for cutting other sounds; you'll see that this leaves less of a digital feel, giving your sounds clarity and warmth.
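If you want to hear what those slopes do outside of an EQ plugin, a quick way is scipy's Butterworth filters, where the order maps to the slope (order 1 is roughly 6 dB/octave, order 2 roughly 12, order 4 roughly 24). This is just a sketch on placeholder audio:

```python
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
kick = np.random.randn(sr)                 # placeholder audio; load your own kick here

# Order 4 ~ 24 dB/octave: the steep low cut suggested for kicks
sos_steep = butter(4, 20.0, btype="highpass", fs=sr, output="sos")
kick_cleaned = sosfilt(sos_steep, kick)

# Order 2 ~ 12 dB/octave: a gentler slope for other sounds
sos_gentle = butter(2, 120.0, btype="highpass", fs=sr, output="sos")
pad_cleaned = sosfilt(sos_gentle, kick)
```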

I’d also cut the highs where they’re not needed, but not too much either.

Percussion, melodies, and high-pitched sounds such as hi-hats would benefit from a 6 dB/octave high-cut filter; this smooths things in a lovely way.

Some of my favorite filters for this kind of use are:

EVE-AT1 from Kuassa

SliceEQ by Kilohertz

PSP MasterQ2: Smooth!

Sharp cuts

Surgical, sharp, static cuts are very useful for a ringing resonance. Many people ask how to spot these and how to know if it's really something to cut or if it has to do with the acoustics of the room. There's no real way to know other than to frequently cross-validate with reference tracks.

So often, I get clients sending me a project in Ableton and I see really odd cuts. Is that bad?

Yes and no.

First off, if you use Ableton’s native EQ, switch it immediately to oversampling mode for better quality.

Second, cutting might correct something you hear in your environment, but you'll also permanently cut frequencies that might not have needed changing, which could potentially induce phasing issues (i.e., over the entire length of the song).

*Note – do not use too many EQs in one chain because that will definitely cause phasing!

So, how do you spot one rogue frequency?

Sometimes I just use a spectrum meter to get hints if I can't pinpoint where it is. Try to always have a spectrum meter on your master for an overall indication of your mix. If you see some sounds that start to poke above 0 dB, this *might* be a problem; not always, but it could be. What you want to look for is one thin spike jutting out about 3-6 dB above its surroundings. That might really be an issue.

My instinct would be to try to lower the volume of the sound itself if that’s possible. Sometimes it’s not and that’s when you use an EQ.

  1. Isolate the sound in the appropriate channel.
  2. Drop your EQ of choice (see below for suggestions).
  3. Pick an EQ point, set it to the frequency you spotted, then adjust the Q to 3-4. Cut 4 dB to start with, more if needed.
  4. On the EQ, there should be an output gain. If you have cut that frequency away, it can be good to increase the output gain by about half of what you cut. Ideally I like to compress instead, but we'll get into that later. (A rough sketch of this kind of cut follows the list.)
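For those who like to see the math, here is a sketch of that kind of narrow cut using the standard "RBJ cookbook" peaking biquad (plain scipy, not any particular EQ plugin; the 3.2 kHz tone is just a stand-in for a rogue resonance):

```python
import numpy as np
from scipy.signal import lfilter

def peaking_cut(audio, sr, freq_hz, gain_db=-4.0, q=3.5):
    """RBJ-cookbook peaking biquad: a narrow cut (negative gain) at freq_hz."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * freq_hz / sr
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], audio)

sr = 44100
tone = np.sin(2 * np.pi * 3200.0 * np.arange(sr) / sr)    # placeholder ringing tone
tamed = peaking_cut(tone, sr, 3200.0, gain_db=-4.0, q=3.5)
```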

TIP: Avoid sharp cuts in the low end; they can cause issues such as phasing and muddiness. If you really have to, make sure to use a mono utility afterwards.

I revealed some of my favorite EQ plugins in the first post in this series, but I’ll add some more:

Cambridge EQ by Universal Audio: Works amazingly well on synths and melodies.

AE600 by McDSP.

Voxengo CurveEQ: Solid on percussive content.

Valley cuts, boosts, and shelving

Much of what you'll read on the subject of EQ'ing recommends that if you need to boost, go moderate and use a very low Q to get an open curve. However, there are really no rules about what you should or shouldn't do. Explore, fail, and be audacious, because sometimes great things come out of it.

My only red flag would be those really complicated, multi-point EQ curves you can draw in Fabfilter ProQ2. These sometimes induce weird resonances when you bounce, which is no good for mastering unless you are OK with annoying people's ears.

Also, think differently: if you're going to use 3-5 points that are all boosting, why not start by turning up the gain on your EQ's output and cutting down whatever you don't want instead?

But if you boost, I like to have a Q below 1. It gives really interesting results!

  • For instance, try boosting 2-3 dB at 500 Hz to instantly give presence and body to a song.
  • Try it at 8 kHz to add a lush, bright presence to metallic percussion.
  • Boost at 1 kHz on your snares to make them pop out of the mix.

Experiment like this. At first it will appear subtle but with practice, great results will come.

My favorites of the moment:

Sie-Q by SoundToys, for really beautiful shelving.

MEqualizer by MeldaProduction.

 

SEE ALSO : Tips and recommendations for compression (Pt. 1)

Playing Electronic Music Live (Part 6, final)

I recently played a live set at StereoBar in Montreal for the launch of my album Returning Home and it was very interesting to create a live set from scratch, following the advice of this series I’ve written over the past few months about playing electronic music live. I’d like to share with you a bit about how it went, as well as some personal notes I took for future performances.

Notes about preparation of my set

Returning Home has many tracks; I wasn’t sure how I’d approach them in a live context considering they are all pretty intricate, full of details, and pretty much impossible to execute as the recorded version(s). I decided to go through all the songs and export the stems for each group of sounds – plus making sure the kick and bass would be isolated – so I could control how they would come and go in the set.

Exporting stems took me a while. I had also exported stems from certain songs that weren’t included in the album but that I wanted to play. I had a good 17 tracks ready, with about 8 channels exported per track. I imported everything into my new live set, and added everything in the right columns and with the colors I needed. I also started chopping the stems into sections so I could trigger some parts spontaneously.

After a few days of geeking out, I started playing the tracks to see how the flow felt and whether the transitions were working. I played with effects, trying to spice up the main ideas to surprise people. As I kept rehearsing and trying to figure out how to play the songs, I found myself becoming very bored with what I was hearing. The thing is, when you've spent months making an album, you get to a point where you can't listen to your own music anymore – and playing it as-is felt too safe, too simple.

Live at MUTEK Chile 2006

Live in Zurich 2005

I scrapped everything. I remember thinking that this whole series advising people on how to play live was crap, but after going through the process myself again, I realized it still had a lot of value; even I had done the preparations wrong. I then remembered how I used to LOVE playing live 15 years ago, and a flashback excited me: pure improvisation. I realized that using stems wasn't improvisational enough and that my music is, in itself, pure chaos.

I went back to my pool of sounds which didn’t make the cut originally and started chopping sounds, deconstructing stems, and re-exporting new parts. Then I started creating a space where I could remix the whole album on the spot, plus adding unexpected, unused sounds. Basically, it was combining the bass of track 2 with the melody of track 7, then percussion of track 4…pure remixing. I found a core idea for each moment of my set, and left a lot of space for reinterpretation. It worked and I was having a lot of fun.

My setup for this Set

I was using Ableton Push and 2 Novation Launch XLs as mixers for all the channels (I ended up using 10). For some reason, each time I've tried using PUSH live it has never really helped, but I felt this time I wanted to use it. I love the Novations, so using 2 felt really amazing.

Limitations: My Macbook Pro only has 2 USB ports, so I needed a hub to accommodate multiple devices.

Soundcheck at Stereo

Soundcheck

Arriving at the venue, I felt really confident; perhaps too confident. The soundcheck went so smoothly that – in my experience – when that happens it gives you the feeling that something will go wrong later.

I had spent time in the studio carefully tweaking each channel with EQs to make sure the sound wasn't too harsh or piercing. I also decided to use a Manley compressor from UAD on the master, which made everything really smooth. It was important to use a reference track to set an EQ curve on the master. It really paid off at soundcheck; I didn't have to do much and everything went smoothly.

TIP: Listen to your reference track before soundchecking, then play it to adjust an EQ on the master.

The show/performance

After a great start, shit started to hit the fan. As a track was playing, I noticed my mixer wasn't responding and realized it had rebooted. By rebooting, it made the second mixer crash, and the PUSH too. I wasn't even 5 minutes in and the wheel of death was spinning on my Mac. I waited patiently and luckily it went back to normal. After this glitch, I disconnected one of the Novations and plugged it directly into my computer instead of the USB hub I had bought that same day (cheap connectors are always a big mistake!). PUSH was frozen and not doing anything, so I had to activate clips with my mouse. Luckily, with my experience playing live for so many years, I was able to do this in a way that people didn't notice. The Novations kept crashing one after the other. Each time, I had to patiently unplug them to restart them, and then the wheel would spin on my computer; for some reason they would work for a good 20 minutes but then crash again.

Luckily, no one noticed anything! I could have played a really great show that night if everything had worked properly, because Stereobar has the perfect setup for me. It was a bit disappointing, but I still received a lot of good feedback.

MUTEK Montreal 2006

Live in London 2005

After the show

Despite the technical issues, it was a great show and fun nonetheless.

To summarize, a few tips here based on this live experience:

  • Don't buy gear the day of a show without testing it first. A soundcheck is never 100% representative of what the show will be and can never be a real test.
  • Deactivate Ableton Live's auto-update feature. Mine actually upgraded the day before to a version with a pretty big bug in it. I had to reinstall the software, which was stressful. Thanks to Ableton tech support for the swift reply on that one.
  • Never panic when problems arise. Most of the time, people don’t notice.
  • Try to avoid shitty USB hubs! I’m still trying to find a better alternative.

I hope this series was helpful!

Sound design: create the sounds you imagine inside your head

You might never be able to recreate the sounds you envision in your mind with 100% accuracy using sound design, but I can offer some advice to give you a good starting point for getting close. Just like in painting and cinema, our imagination often plays tricks on us; you might imagine what you think could be "the best idea ever," but once you actually get down to working on it, you quickly realize there's a world of difference between your imagination and the final output.

So, is there a way to use sound design to transpose those ideas into something practical?

Yes, absolutely.

Sounds have a structure, shape, and form, and when you "hear" something in your mind, you have to translate that idea into a precise description that will enable you to get started on actually creating it.

To get a good start in the sound design process, ask yourself the following question:

Can you explain your idea verbally?

The first step is to analyze the physical characteristics of the sound. Keep in mind that sound has multiple axes and characteristics:

  • Time: A sound can be short, long or somewhere in the middle. The temporal aspect of a sound is basically its duration.
  • Envelope: The ADSR (Attack, Decay, Sustain, and Release) envelope is what I'm referring to here. For example, does your sound start out loud and then fade away, or does it do the opposite? (There's a short sketch of this right after the list.)
  • Frequency Spectrum: Is the pitch of the sound high or low?
  • Harmonic or inharmonic: Does your sound have tonality, or is it more noise-based?
  • Position: Is your sound static or panned to one side? Is it moving?
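
To make the envelope idea concrete, here's a minimal sketch in Python with numpy. It's purely illustrative: the function name, durations, and levels are my own made-up values, not settings from any specific synth or tool.

```python
# A minimal sketch of the ADSR idea from the list above (illustration values only).
import numpy as np

def adsr(attack, decay, sustain_level, sustain_time, release, sr=44100):
    """Build a simple linear ADSR amplitude envelope as a numpy array."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)           # rise to full level
    d = np.linspace(1.0, sustain_level, int(decay * sr), endpoint=False)  # fall to sustain level
    s = np.full(int(sustain_time * sr), sustain_level)                    # hold
    r = np.linspace(sustain_level, 0.0, int(release * sr))                # fade out
    return np.concatenate([a, d, s, r])

sr = 44100
env = adsr(attack=0.01, decay=0.1, sustain_level=0.6, sustain_time=0.3, release=0.5, sr=sr)
t = np.arange(len(env)) / sr
tone = np.sin(2 * np.pi * 220.0 * t) * env   # a 220 Hz sine shaped by the envelope
```

Swapping the attack and release values around is a quick way to hear the "loud then fades away" versus "swells in" difference described above.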

Second, you need to identify the source material for your sound and decide how it will be shaped:

  • In a previous post, I talked about layering sounds. One great way to get started is to find existing sounds and layer them into something close to what you have in mind. For instance, a tom, a clap, and a snap, when glued together, will form a rounded sound that extends up into the highs (there's a quick sketch of this layering idea below the screenshot). When combining your sounds and layers, I recommend using a good compressor of the Opto or Vari-Mu type; they're musical and give your sound a great feel. Check out Native Instruments' Vari Comp or KUSH's Novatron, which came in strong in 2017 as one of the best tools on the market for a reasonable price.
  • If you're more into synthesis, you can experiment with a subtractive approach using multiple oscillators and a good filter. I usually use Ableton's Operator, but this year U-He's Repro 5 has been really nice for me in terms of sound design; lovely, creamy sounds. I like to set my low end and mids to sine waves, then shape the harmonics with oscillators set to square or triangle. Experiment endlessly!
  • Another interesting option is to use field recordings. You might think this approach is a bit odd, but you can even try to make the sound with your mouth or find objects to hit; you'll always end up with something interesting. You'll also be surprised by how much you can do by recording your own voice. For a great, affordable field recorder, check out any of the recorders by Zoom; they even make one that plugs into your iPhone, which is quite handy.

Sound design - Native Instruments' Vari Comp

Native Instruments’ Vari Comp
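
As a rough illustration of the layering idea from the first bullet above, here's what stacking a low body, a noisy mid layer, and a short high transient might look like in code. This is a sketch of the concept only; the frequencies, decay times, and gains are arbitrary, not a recipe from any plugin.

```python
# Layering sketch: a low sine "tom", a noise burst "clap", and a short high click,
# each with its own decay envelope, summed with separate gains.
import numpy as np

sr = 44100
n = int(0.4 * sr)
t = np.arange(n) / sr

tom   = np.sin(2 * np.pi * 70.0 * t) * np.exp(-t * 12.0)           # low, round body
clap  = np.random.randn(n) * np.exp(-t * 30.0) * 0.5                # noisy mid layer
click = np.sin(2 * np.pi * 4000.0 * t) * np.exp(-t * 200.0) * 0.3   # short high transient

layered = 0.8 * tom + 0.5 * clap + 0.6 * click
layered /= np.max(np.abs(layered))   # normalize so the summed layers don't clip
```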

 

And lastly, once you’ve established your source material, you can then dive into carving your sound:

Time: There are a few things you can do to manipulate the time and duration of your sounds. Pitch-shifting something to slow it down or speed it up is fun. Granular synthesis is always an option as well; one of many tools for it is The Mangle VST. I also enjoy using a dark reverb with a long tail to stretch the length of a sound. Any reverb can do a good job here, and you can easily experiment with free options found on KVR.

Sound design - The Mangle granular synthesizer

The Mangle granular synthesizer
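
For the simplest version of the "speed it up / slow it down" idea, here's a tiny, hedged sketch of tape-style repitching by resampling. The function name and values are mine, for illustration only; real pitch-shifters and granular tools do far more than this.

```python
# Resampling changes both duration and pitch, like playing a tape at a different speed.
import numpy as np

def repitch(signal, semitones):
    """Resample by a pitch ratio; positive semitones = higher and shorter."""
    ratio = 2.0 ** (semitones / 12.0)
    old_idx = np.arange(len(signal))
    new_idx = np.arange(0, len(signal), ratio)   # step faster or slower through the sample
    return np.interp(new_idx, old_idx, signal)

sr = 44100
t = np.arange(int(sr * 0.5)) / sr
source = np.sin(2 * np.pi * 440.0 * t) * np.exp(-t * 4.0)
down_a_fifth = repitch(source, -7)   # lower pitch, longer sound
```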

Envelope: If you have a big chunk of sound that you want to shape, there are again multiple options. If you're using Ableton, the easiest way is to use the clip envelopes and draw out the volume or gain over time. There are also a few dedicated volume-envelope tools out there; one I like is VolumeShaper by Cableguys; really powerful and fun.

  • TIP: If you want a really fast transient on your envelope, try using a transient shaper. Transient shapers can also help with sustain.
  • TIP2: A VCA compressor with a slow attack can also give you great results.

Frequency Spectrum: As I mentioned, I personally like to experiment with a pitch shifter, but I also experiment with a 3-band EQ and a compressor; mostly a FET type, which is a bit more aggressive (I recommend learning about the different compressor types if you're unfamiliar with them). This way you can control specific parts of your sound and decide which parts to emphasize. It's definitely not the only way; there are many other creative ways to use an EQ alone (such as the UAD Cambridge), but I like to combine multiple effects and play with them as I search for the right sound.
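
To show what a 3-band split actually does under the hood, here's a hedged sketch using scipy's Butterworth filters. The crossover frequencies, filter order, and gains are arbitrary illustration values, not settings from any particular EQ.

```python
# Split a sound at two crossover points, apply a gain per band, and sum the bands back.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def three_band(signal, sr, lo_cut=200.0, hi_cut=2000.0, gains=(1.0, 1.0, 1.0)):
    low_sos  = butter(4, lo_cut, btype="lowpass", fs=sr, output="sos")
    mid_sos  = butter(4, [lo_cut, hi_cut], btype="bandpass", fs=sr, output="sos")
    high_sos = butter(4, hi_cut, btype="highpass", fs=sr, output="sos")
    low  = sosfiltfilt(low_sos, signal)
    mid  = sosfiltfilt(mid_sos, signal)
    high = sosfiltfilt(high_sos, signal)
    return gains[0] * low + gains[1] * mid + gains[2] * high

sr = 44100
noise = np.random.randn(sr)                              # one second of noise as test material
darker = three_band(noise, sr, gains=(1.2, 1.0, 0.6))    # boost the lows, tame the highs
```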

Harmonics: Harmonics can often be manipulated with saturation and/or distortion. If you're looking for a good distortion tool, check out the Scream VST by Citonic, which offers tons of options. Otherwise, Softube's Saturation Knob is a great tool for anything from subtle to drastic changes. I suggest playing with filters as well; they can enhance parts of your sound, especially if you use them in parallel (through a send/bus track).
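
At its core, most saturation is a waveshaper. As a minimal, assumption-heavy sketch (not how any of the plugins above are implemented), driving a signal into tanh adds harmonics like this:

```python
# Soft-clipping saturation: higher drive pushes more of the signal into the curved
# part of tanh, which adds odd harmonics.
import numpy as np

def saturate(signal, drive=4.0):
    """Soft-clip with tanh, then rescale so peaks stay roughly comparable."""
    shaped = np.tanh(signal * drive)
    return shaped / np.tanh(drive)

sr = 44100
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 110.0 * t)
gritty = saturate(clean, drive=6.0)
```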

Position: Try out any panner. There are multiple panning plugins on the market, but be careful not to make your sound spin too much in the design phase; you don't know what the position of your other sounds will be yet, and you might end up undoing everything later anyway. You can beef up the sound with a chorus or a doubler to manipulate its position even more, but as I mentioned, try not to go too crazy with panning when creating a single sound.
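
If you're curious what a panner is actually doing, here's a small sketch of an equal-power pan law swept by a slow LFO. The rate and depth are deliberately gentle, and all values here are my own illustrative choices.

```python
# Equal-power panning: map a position in [-1, 1] to left/right gains on a quarter circle.
import numpy as np

def pan(mono, position):
    """position in [-1, 1]; returns an (n, 2) stereo array using an equal-power law."""
    angle = (position + 1.0) * np.pi / 4.0     # map [-1, 1] to [0, pi/2]
    return np.stack([mono * np.cos(angle), mono * np.sin(angle)], axis=-1)

sr = 44100
t = np.arange(sr * 2) / sr
mono = np.sin(2 * np.pi * 330.0 * t)
lfo = 0.4 * np.sin(2 * np.pi * 0.25 * t)       # gentle sweep, well short of hard left/right
stereo = pan(mono, lfo)
```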

These are just a few sound design techniques and ideas to get you started in creating and designing the sound you imagine inside your head. Have fun!

 

Bonus: A good way to come up with unexpected design ideas is to use randomization. Here’s an amazing tutorial by my buddy offthesky.

 

SEE ALSO : Creating Beauty Out of Ugly Sounds

Dynamic Sound Layering and Design

Sound layering can be a very complex or very simple technique in music creation and production, depending on your goals. In a past post, I gave some really basic sound design tips; I have a lot of readers who are just starting out with mixing and producing, so it made sense to start with something less intense. This second post about sound design, however, will focus on something a bit more advanced but still very simple: sound layering. It actually surprises me how many people overlook the techniques that let them get the most out of layering, so I thought I'd write about it.

First off, I'd like to discuss Ableton's groups. Many people use them as the equivalent of busses, where all the grouped sounds are treated in a specific way, and yes, that approach works really well. However, I prefer using a dedicated channel as a bus and keeping groups for sound design or classification. A good example is kicks or claps, which are usually a combination of up to three different samples or sound sources (e.g. two samples and one synth). Since each of these sounds is a collection of multiple samples, it works best as a group.

Visually it looks better and is easier to manage, and you can also put effects on the group to glue all the sounds together; generally you'll need a compressor and one or two EQs for a relatively uniform group. Once I've done that, I usually like to have an additional bus for all the sounds (i.e. the groups) that glues everything else together.

A second point to keep in mind is that there are always multiple ways to do sound design. What I show you here is simply how I do it; other people use different techniques, and I try to keep things simple. Two methods in Ableton that I like and will describe here are the arranger and the Drum Rack.

If you work in the arranger, you drop sounds into a channel, and it's an easy way to see the layers. I like turning off the grid when doing this so it feels a bit more natural.

You can adjust the volume for each layer and tweak the EQ to get part of the spectrum of one sound, and the complementary part of another.

You can do the same with the attack and release; there are so many options. I also recommend using the faders for more control. So basically, volume and EQ are your best friends here. Brainworx has an amazing filter I recommend; it's super solid for sound design.

If you prefer, you could also use the Drum Rack to do the same thing. Load the same samples into the pads and then sequence them with MIDI instead of placing them in the arranger. Some people dislike working this way because they can't easily see the waveform of the audio file. The advantage of this approach, though, is that you get access to more ways of manipulating your sounds, like the extra controls in Ableton's Sampler window.

What I think works best in the end is to combine the arrangement-based layering with an extra channel running Sampler, so you can add constant movement. The main thing you want from your sound design is a feeling of liveliness and emotion. Sampler has LFOs you can assign to filters, panning, or volume, a subtle touch that creates a nice layer of movement (there's a small sketch of this LFO idea below). In the same way, I'd even add a synth of your choice to give richness to the sound, with oscillators reinforcing the fundamental with a discreet tone; more complex sound layering.
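
Here's that LFO-for-movement idea as a rough sketch in code: a slow LFO sweeping the cutoff of a one-pole low-pass filter. This is only an illustration of the concept under my own assumptions (rate, cutoff range, test material), not how Ableton's Sampler is implemented.

```python
# A slow LFO modulating a one-pole low-pass filter adds gentle, continuous movement.
import numpy as np

def moving_lowpass(signal, sr, lfo_rate=0.2, lo=300.0, hi=3000.0):
    t = np.arange(len(signal)) / sr
    cutoff = lo + (hi - lo) * 0.5 * (1 + np.sin(2 * np.pi * lfo_rate * t))  # swept cutoff in Hz
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)   # per-sample one-pole coefficient
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y = y + alpha[i] * (x - y)
        out[i] = y
    return out

sr = 44100
noise = np.random.randn(sr * 4) * 0.3    # four seconds of noise as a stand-in for a pad layer
pad = moving_lowpass(noise, sr)
```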

Finally, on the group of the sound itself, I would add nothing but an EQ and compressor to “glue” everything together, but you could also use reverb to broaden your stereo image. These techniques should help you improve your sound design skills!

SEE ALSO : Sound design: create the sounds you imagine inside your head