For people following this blog it should come as no surprise when I say that Ableton Live and Reason are the two environments I use the most (where Live is, obviously, often extended with Max for Live).
I’m pretty passionate about those products and my passion has led to some people following up on some of my ideas (Hi Rob! ;-)).
When it comes to Live ‘versus’ Reason there’s one question I often see popping up: “How do I do X in Reason like I do in Live?”, or the other way around. Live is all about putting stuff together, configuring it and then using it, whereas Reason requires you to get your hands a little ‘dirtier’, so to speak. So today I’d like to go over some common options in Live and Reason to see how Live differs from Reason. Or, if you’re a Reason user: how Reason differs from Live 😆
“Hardware” vs. “Easy”
Although Reason devices may look like hardware, the association doesn’t stop there. It’s also what I came to appreciate about Reason so much: it doesn’t only mimic hardware when it comes to looks, it also simulates hardware behaviour to a certain degree. Although Reason can set up a lot of its signal routing automatically, there will always be moments where you have no choice but to flip the rack and handle things yourself; for example when setting up sidechain compression.
Live on the other hand has a somewhat ‘easy’ look about it, especially with the default colour scheme. I think it’s that specific look and feel which made so many people consider Live to be relatively easy. However, the moment you start to dive in a little you’ll notice that there’s much more to Live’s workflow and its instruments (such as Operator or Analog). Although Reason absolutely tops it when it comes to signal routing capabilities, you shouldn’t underestimate Live either. Although, come to think of it, in all fairness I think Max is the real winner there 😀
So, let’s go over some common tasks to see how it’s done in both environments and where they differ. In my examples I’ll be using an audio track and a MIDI track in both environments, if applicable and if not stated otherwise.
Sidechain compression

As we all hopefully know, sidechain compression means that you’re compressing one sound signal based on another sound signal. This can be useful if you want to make sure that one signal doesn’t clash with another, by automatically reducing its volume.
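If you’ve never thought about what actually happens inside the compressor, the ducking behaviour boils down to a gain value computed from the key signal’s level. Here is a minimal Python sketch of that idea; the function name, the crude per-sample level detector and the numbers are my own illustrative assumptions, not how Live or Reason implement it internally:

```python
# A toy sidechain compressor working on plain Python lists as audio buffers.

def sidechain_compress(main, key, threshold=0.5, ratio=4.0):
    """Attenuate `main` whenever the level of `key` exceeds `threshold`."""
    out = []
    for m, k in zip(main, key):
        level = abs(k)                     # crude per-sample level detector
        if level > threshold:
            # Above the threshold the excess is divided by the ratio;
            # the resulting gain reduction is applied to the main signal.
            compressed_level = threshold + (level - threshold) / ratio
            gain = compressed_level / level
        else:
            gain = 1.0                     # key is quiet: pass main through
        out.append(m * gain)
    return out

# A steady pad (main) ducked by a kick-like burst (key):
pad  = [0.8, 0.8, 0.8, 0.8]
kick = [0.0, 1.0, 1.0, 0.0]
print(sidechain_compress(pad, kick))       # the pad ducks while the kick sounds
```

A real compressor would of course smooth the level with attack and release times instead of reacting per sample, but the send-the-key-in, duck-the-main-signal principle is the same.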
In Live this is relatively easy: pull up the Compressor audio effect on the track you want to compress, click the arrow in its title bar and turn on the Sidechain option. All that’s left to do is select the track from which you want to use the audio, and then select from which point you want to grab it: the pristine sound, the sound after it has been processed by any insert effect(s), or the sound after it has passed the mixer section.
In Reason this works a little differently. The main difference is that while Live uses a per-channel approach, Reason does this per device. Or put differently: it allows you to apply the compressor to anything you want, whether it’s an individual instrument or your entire audio score.
And to make things even more interesting, Reason offers several ways to apply sidechain compression: either with the MClass Compressor device or with the new mixer section.
Here I have a setup comparable to that of Live: 2 instruments connected to the line mixer, and my goal is to compress the sound of Thor with that of Kong. To that end I already added the MClass compressor.
There are several ways we can set this up. For example, we could configure the compressor as a send effect on the line mixer; that way you can control the amount of signal you want compressed by using the auxiliary knob on the mixer channel you want to compress. In my example I chose to fully compress Thor’s signal with that of Kong, so without using the mixer.
Here you see the back of the compressor. As you can see it has an audio input and output section as well as a sidechain input. That is what we’ll be using. Basically we’re going to re-route the audio out of Thor and send it to the audio input of the compressor. The audio out of the compressor is then connected to the line mixer again, so the compressor basically sits between Thor and the mixer.
Then we need to split up the audio output from the Kong device and send it both to the line mixer and to the sidechain input on the compressor. Because the sidechain input is only used inside the compressor device, we don’t have to worry about this signal finding its way back into our main audio score:
This may look a bit overwhelming, especially to people who don’t use Reason, but it looks more complicated than it actually is. In fact, if you select Thor when pulling in the compressor, Reason will automatically re-route the signal so that Thor’s audio output first passes through the compressor. All you have to do is split the signal from Kong using a Spider audio merger/splitter and route it to both the mixer and the sidechain input.
Although this is more work than simply pulling in a compressor device and clicking a few buttons, it also gives us much more flexibility. Like I said, in Live I can only use the signal of a single channel (though if you use a return channel, either as input or as sidechain input, you can combine signals from multiple channels that way). Here in Reason I don’t need extra steps to send more audio signals into the sidechain input; all you’d need is an extra Spider audio merger/splitter…
But there’s more…
Using Reason’s main mixer
As you can see here, the new main Reason mixer has a dynamics section. This is basically a built-in compressor and limiter which you can use on the incoming signal. It even allows you to change the routing and processing of the signal. By default, as shown here, the audio first passes the dynamics section, then the equalizer section and finally the insert effects section.
However, you can set up this routing any way you want: you can put either the equalizer or the insert section above the dynamics section, so that the signal passes one of them first, or put both of them on top, effectively reversing the entire route.
And as if that wasn’t enough, there’s more. As you can see on the right, the equalizer section is split in two parts: a filter section with a lowpass and a highpass filter, and the actual 4-band equalizer section itself. You can set it up so that the filter section is only applied to the sound before it enters the dynamics section. This can be very useful if you want to apply dynamics to specific frequency ranges; this is often used for “de-essing”.
So… As mentioned above, you can use the dynamics section with a sidechain input as well. This is where the ‘key’ button comes into play (it basically means that the dynamics section will be ‘keyed’ by an external signal).
What is important to know here is that every channel strip in the main mixer is represented by a device in the rack. The mix channel device basically represents a MIDI channel, whereas the audio track device… Well, I guess that’s obvious enough 🙂
Here you see the back of the mix channel device. Notice how it has both an audio input as well as a dynamics (sidechain) input? That is how you can set up sidechain compression using the main mixer. The earlier example of using Kong to compress the signal of Thor would look like this if you used the main mixer:
As you can see you still need to split Kong’s audio signal, but the actual configuration is now set up in the main mixer. As a side note: using sidechain compression on the main channel (the so-called master section) is done in exactly the same way. On the back of the master section device you’ll also find a dynamics input; if you connect an audio signal there, it will be used for sidechain compression on your whole music score.
Send effects

A send effect is basically an audio effect which you set up once and then apply to one or several sound signals. The main idea is that you send some of your original audio signal(s) to the send effect, and its result is then mixed with the original (‘dry’) sound.
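The send/return idea can be sketched in a few lines of Python: every track sends a scaled copy of its signal to one shared effect bus, and the processed result is mixed back in with the dry signals. The function names and the trivial stand-in “effect” are my own illustrative assumptions, not anything from Live or Reason:

```python
# A toy model of a send/return bus using plain lists as audio buffers.

def apply_send_effect(tracks, sends, effect, return_level=1.0):
    """tracks: list of per-track sample lists; sends: per-track send amounts."""
    n = len(tracks[0])
    # Sum the scaled copies of each track into the effect's input bus...
    bus = [sum(t[i] * s for t, s in zip(tracks, sends)) for i in range(n)]
    wet = effect(bus)
    # ...then mix the dry tracks and the effect return on the master.
    return [sum(t[i] for t in tracks) + wet[i] * return_level
            for i in range(n)]

def halve(bus):
    # Stand-in for a real effect such as a reverb: it just drops the level.
    return [x * 0.5 for x in bus]

mix = apply_send_effect(tracks=[[1.0, 0.0], [0.0, 1.0]],
                        sends=[0.5, 0.0],   # only track 1 feeds the effect
                        effect=halve)
print(mix)   # track 1 gains some processed signal; track 2 stays fully dry
```

Note how one effect instance serves any number of tracks, each with its own send amount; that is exactly why send effects are cheaper than putting a separate insert effect on every track.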
In Live you use the so-called return tracks to set this up. A return track, as seen on the right, is basically a regular track with the specific difference that it can only contain sound effects on its channel strip. Also, its send controls are disabled by default in order to prevent accidental feedback loops (sending the sound from Return A back into Return A could result in an endless loop).
You can pull up the send/return sections by using Live’s section control icons, which can always be found in the lower right part of the top section of your screen, or by using the keyboard shortcuts of course.
After you set up the sound effect on one of the return tracks, all that’s left to do is use a track’s Sends section to tell Live how much of the signal it should send to the matching return track. From there it gets processed and sent to the master track, where it gets mixed with the original.
Of course you can change this behaviour any way you want; as I’ve shown above you can use the I/O section to set it up so that a track only sends its output to the return tracks (‘Sends Only’) instead of the master, and this can also be applied to the return tracks themselves.
Live supports a maximum of 12 return tracks in one Live set.
Fortunately for us it’s also quite easy to set up send effects in Reason, though as you might expect, Reason provides more than one way to do it. The usual way is to set up send effects which can then be controlled using the main mixer section, but you can also set up send effects for your own selection of devices. However, if you do that you won’t be able to control them using the main mixer section; you’ll need the rack for that.
Here you see the back of the Master device; as you can see it has 8 send and return channels which can be used to connect audio effects, which in turn can then be controlled using the main mixer.
If you select the Master device and then add audio effects, these will automatically be set up as send effects. But of course you can also set them up manually.
In the main mixer the master channel has an FX Sends section, which you can see on the left. Here you see what it looks like after I’ve connected the RV7000 reverb, the Line 6 bass amplifier and the Softube saturation knob as send effects on the Master device.
The dials set the level of the dry signal which is sent to the sound effect, which makes this a special setup since you can configure the sound level in multiple places in the mixer.
The edit button here makes it easy to access the specific effect device should you need to tweak it further. After you click it you’ll be taken back to the rack and the audio effect will be automatically selected.
The other parts are the FX return section on the master channel as well as the sends section which is available on any of the other channels.
In the Sends section you can set whether the channel should actually use a specific send effect and, if so, at what level (the green knob). The green knob is comparable to the Sends knobs which reside on the tracks in Ableton Live, as shown on the left.
A cool feature is the ‘Pre’ button, which tells the mixer that the dry signal sent into the send effect shouldn’t be influenced by the channel’s fader.
Normally the amount of signal which gets sent to the effect is also affected by the main channel fader: the lower the fader, the less sound gets sent to your master channel, but it also decreases the amount of signal which gets sent to the send effect. The ‘Pre’ button, which stands for “Pre-Fader”, tells the mixer that the send signal should not be affected by the fader at all.
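The pre/post difference is easiest to see in a tiny sketch. Here is an illustrative Python version (the function and parameter names are my own assumptions, just to show the signal flow):

```python
# Sketch of pre-fader vs. post-fader sends: with a post-fader send the
# channel fader scales what the send effect receives; a pre-fader send
# taps the signal before the fader and therefore ignores it.

def send_level(sample, fader, send_amount, pre_fader):
    """Return the level that actually reaches the send effect's input."""
    source = sample if pre_fader else sample * fader
    return source * send_amount

# Fader pulled halfway down, send knob at 0.8:
post = send_level(1.0, fader=0.5, send_amount=0.8, pre_fader=False)
pre  = send_level(1.0, fader=0.5, send_amount=0.8, pre_fader=True)
print(post, pre)   # the pre-fader send is unaffected by the fader position
```

A classic use of this: pull the channel fader all the way down and leave a pre-fader send to a reverb open, and you hear only the wet, processed signal.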
Next is the FX Return section in the master channel. The blue level knob is comparable to the fader on a return track in Live; it affects how much of the processed signal gets sent back to the master channel. The pan knob should be obvious: it affects the stereo placement of the returned signal.
The edit button is the same as the edit button I mentioned above in the FX Sends section of the master channel, and finally the M button mutes the effect entirely, thus affecting all the channels which use this particular send effect.
As you can see, Live and Reason don’t differ that much when it comes to the basics of setting up send effects, but Reason’s mixer section does provide a lot more control over how you treat the signal you want to process.
As I showed above, you can only connect a maximum of 8 send effects to the Master device. However, this doesn’t have to stop you from using more send effects for specific devices:
Just because Reason now uses the new mixer section which came with Record doesn’t mean the previous mixer devices have become obsolete. You can easily pull in a 14:2 mixer device, set it up with its own batch of send effects (up to 4 per mixer device) and then connect your instruments to it.
So as you can see: where Live can be pretty straightforward, Reason allows for a broader approach with more diversity.
From MIDI to audio

Say you create a nice sound using either Analog or Thor and now you want to record the sound itself instead of the MIDI data. Ever since Reason 6 got released it can also record and process audio, but like the previous topics it comes with a little twist.
In Live you have quite a few options to accomplish this. The easiest way (IMO) is to change the ‘Audio From’ setting on the audio track to which you want to record and set it to ‘Resampling’. This tells the audio track to pick up its input from the master track, or put differently: to grab the entire sound score.
Of course you can also select an individual track; if all I’m after is the sound from my Analog device then I can simply point ‘Audio From’ to track 2, which contains Analog.
To record the actual audio I prefer to start by recording my playing into a MIDI clip, and then use the scene launch to start both the clip playback and the audio recording.
To set this up you need to enable the option “Start Recording on Scene Launch”, which you can find in the ‘Record Warp Launch’ section of Live’s preferences screen.
This makes sure that if you press scene launch, all the clip slots which reside on armed tracks will start recording. So basically I’m starting playback of my MIDI clip while at the same time the clip on my audio track starts recording. After that it’s easy: you record the music part and you end up with an audio clip (representing an audio file).
Another option is to freeze your track.
You can freeze tracks in both the Arrangement and Session view; the result can be seen on the right. I recorded some stuff in 2 clips on the track which contained an Analog device, then froze the track and finally used the Live browser to check up on the current project.
As you can see Live has created .wav files for every clip which I had recorded on this track.
This is a very quick and easy way to generate usable audio data from your own recordings. Note that once you have frozen a track you can also flatten it; this tells Live to replace your track contents with the (static) sound data which was generated during the freezing process.
Both options have their pros and cons, obviously, but even so they make it very easy to quickly produce workable audio data.
In Reason there are also a lot of different ways to accomplish this. The first thing to notice is the changed I/O section in Reason 6’s hardware device (in comparison to Reason 4 and 5 (without Record)). As you can see it has an Audio Input section as well as a Sampling Input section. The sampling input allows you to record sounds produced by anything you desire. Normally you’d use one of the ‘Audio Input’ connectors; these represent the audio inputs which are available on your computer (line in, microphone(s), etc.). As a side note: in my examples Reason is ReWired into Live, so you don’t see any audio inputs being available.
Now, if we take another look at the back of a mix channel device, you’ll notice all the way to the right a section called “Direct Out”, together with a warning that this section breaks internal routing.
What does this do? Simple: the moment you use this outlet, all the audio generated through this mix channel device will be sent out directly and won’t find its way to the master section (unless you route it there yourself, of course).
So, this is what I like to do:
I took the direct out and connected it directly to the sampling input, and now I end up with this setup:
So now I end up with audio coming into Reason which I can then record. After this it’s easy: I simply pull in a sampling device such as the NN-XT, hit ‘record sample’ and just start playing:
However, this process basically samples your data, which then ends up directly in a sampler device. But what if you just want to get hold of the audio data and have it put directly onto an audio track? So without the hassle of finding where it was saved in your Reason song, and without somehow pulling it onto a track yourself?
To do that you can also simply record (‘resample’) the data, just like I’ve shown with Ableton Live.
If you scroll up a bit and look at the mix channel device (the one directly under the master section, called ‘Synced channel’) you’ll notice that it has an option called “Rec Source” right below the VU meters. This tells Reason to use the audio from this mix channel and treat it as if it were a regular audio input from which it can record audio data. However, keep in mind that this trick only works if you’re using Reason stand-alone; it won’t work in ReWire mode! You can use the previous sampling-input trick while Reason runs as a ReWire slave though…
Update: my conclusion above is actually wrong. This trick will also work if Reason is being used as a ReWire slave; however, it won’t work if you’re using the global record button on the ReWire master (Ableton Live in my case). Instead you need to manually click the record button in Reason, start playing, and the audio track will happily record (sample) the audio as it is generated by Reason.
And here you see the result. I’ve turned the ‘Synced channel’ into an audio input, added an audio track device and, as you can see, this source is now easily picked up by the audio track. All that’s left to do is record the generated audio in the sequencer.
Here you see the result of me recording both the MIDI data and the audio data into the Reason sequencer.
Easy as that.
Now, I didn’t show the entire process, but that has to wait for another time. The reason is that this post is getting quite big and, to be honest, I’m still not very familiar with the Reason sequencer. I know my way around, but hardly enough to start giving tips about it.
And to finish…
Ableton Live isn’t the only DAW which can directly create audio files by processing recorded MIDI data. This process, called ‘bouncing’, is also available in Reason.
The only thing you need to do is select the option “Bounce Mixer Channels…”, which you can find in the ‘File’ menu. From there you can choose which channels you want to bounce and where the generated audio data should be stored: as individual .wav files on your hard disk, or on a newly created audio track.
So basically Live and Reason don’t differ too much when it comes to bouncing either. Sure, both use a slightly different approach, but in the end the methods ((re)sampling, recording, bouncing) and the results are the same.
I hope this post has given you guys a good impression of how to do some common tasks in both Live and Reason. As you can see, Live is often pretty straightforward whereas Reason requires you to apply some specific settings yourself (like routing audio into the sidechain input of the compressor).
It’s also what I like so much about both programs. In Live I focus more on configuring and tweaking instruments and making presets, while in Reason the environment feels much more inviting to experimentation: building your own patches. Often that ‘building’ is quite literal too.
As such both programs really complement each other, in my opinion.
And there you have it 😉