Creating sample-based orchestral arrangements
Preparing music for virtual instruments
Setting up samples
Mixing and mastering
Recording a real orchestra is expensive. Arranging an orchestral piece with sample libraries, by contrast, can be done on a home PC and does not require costly hardware.
Depending on the difficulty of the music, your experience, your preparation, and the quality of the result you aim for, you will usually spend about 30-200 minutes for every 5 minutes of a single instrument part. A 20-minute piece with 20 parts can therefore take roughly 40-270 working hours.
In this article I use the following terms:
- Orchestral instrument - the real instrument played by an orchestral musician.
- Virtual instrument or articulation instrument - generally a single Kontakt instrument: a collection of samples for one articulation (sometimes several articulations, in the case of round-robin or keyswitch instruments).
- Multi-instrument - a collection of several articulation instruments, usually linked to one MIDI track (sometimes several MIDI tracks, in the case of percussion).
In this part of the article I will cover the general preparations that do not depend on any particular project. Ideally, you should complete these tasks before you even start writing music.
1. Computer hardware
When using virtual instruments to create an orchestral sound, there is one major problem: you usually need a lot of virtual instruments. For example, for a single section of first violins I use 48 instruments (16 each for the C, F and S microphone positions); for a contrabassoon section I use 21 instruments (7 each for the C, F and S microphone positions). So for a full orchestra I usually need about 500-700 virtual instruments.
There are two approaches to using virtual instruments:
- Load all instruments simultaneously. You can then change any note and any sample in any instrument without waiting for loading and unloading. This approach needs a lot of resources.
- Load instruments only for the parts you are currently working on, e.g. the first violins. The problem is that whenever you need to change something in another part, you have to wait for its instruments to load, and then freeze them again afterwards. The waiting time depends on the length of the arrangement and on your hardware; it usually takes from 10 seconds to 10 minutes.
Loading instruments separately is tedious: it requires manual work and waiting. Loading all instruments simultaneously requires a powerful computer.
One more way to reduce hardware requirements is to create a separate single-microphone multi-instrument setup for each part. You can then work with this setup, which consumes roughly a third of the resources, and load the all-microphone setup into each Kontakt instance only before mastering. You may even find the F microphone alone acceptable for mastering, at the cost of somewhat less interesting reverberation.
Another way is to use only the C microphones together with artificial reverberation (a plugin or a hardware unit).
Each virtual instrument requires:
- Memory (RAM). You will usually need 7-10 megabytes of RAM per instrument (with a 30 kB preload buffer), or more with a larger preload buffer (which in turn reduces disk throughput usage). You can reduce this by decreasing the preload buffer size or by purging unused samples (see below).
- Disk throughput (SSD or HDD). One HDD can serve 100-300 instruments, while one SSD can serve 500-900. You will usually want an SSD, because it also greatly reduces loading times, so you waste less time waiting for the project to open; you can often start working and playing simple passages even before loading finishes. You can reduce disk throughput requirements by using a larger preload buffer (which increases RAM usage).
- CPU cycles. If average CPU usage stays below 70%, things usually work fine. Two cores of an i5 at 3400 MHz can handle 500-900 instruments without problems.
You can tune the preload buffer size to balance RAM against disk throughput: if your disk cannot keep up, increase the buffer; if RAM is running out, decrease it.
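The RAM side of this trade-off can be sketched numerically. The per-instrument footprint and the linear scaling with preload buffer size below are rough assumptions based on the figures quoted above, not measured values:

```python
def ram_estimate_mb(instruments, preload_kb=30, mb_per_instr_at_30kb=8.5):
    """Very rough RAM estimate for a set of virtual instruments.

    Assumes the per-instrument footprint (7-10 MB at a 30 kB preload
    buffer, 8.5 MB taken as the midpoint) scales roughly linearly
    with the preload buffer size.
    """
    scale = preload_kb / 30
    return instruments * mb_per_instr_at_30kb * scale

print(ram_estimate_mb(600))      # about 5100 MB at a 30 kB preload buffer
print(ram_estimate_mb(600, 60))  # about 10200 MB if the buffer is doubled
```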
Minimum requirements for 600 simultaneous Kontakt 5 virtual instruments (East West):
- i5-650 Clarkdale, 3400 MHz, 2 cores, 512 KB L2, 4 MB L3, or better.
- 8 GB 1333 MHz DRAM or more.
- 1 TB 7200 rpm SATA HDD.
Recommended requirements:
- i7-3770 Ivy Bridge, 3500 MHz, 4 cores, 1 MB L2, 8 MB L3, or better.
- 16 GB 1333 MHz DRAM or more.
- 256 GB SATA SSD.
- 3 TB 7200 rpm SATA HDD.
If you do not plan to load all instruments simultaneously (i.e. you rely on freezing), the minimum configuration will do.
Remember also that resource usage depends greatly on the type of music: the more notes are triggered and played simultaneously, the higher the load.
There is one important feature you can use to optimize RAM usage (allowing a larger preload buffer, which in turn decreases disk load): purging unused samples. The technique is widely discussed on the Internet. The principle is that a sample is not loaded into memory until it is played for the first time. So you purge all samples, play the piece once from beginning to end, and then save the project. After that, only the samples actually used are loaded into RAM each time you open the project. For me this reduces RAM usage by a factor of 2-3.
There are two small problems with purging samples:
- When you audition samples for notes, each sample is loaded the first time it is played, which can cause a delay. To get around this, play the new sample once more; the delay is gone. Alternatively, fully load (do not purge) the orchestral instruments you are currently working on.
- When you change the articulations of notes, the now-unused articulations are not removed from RAM automatically. To reclaim RAM, you can periodically reset the purge cache by clearing it and playing the whole piece from the beginning. Usually this is not necessary.
Here is a diagram of how these choices affect the main computer hardware requirements:
One useful side effect of purging is that you can see which articulation instruments are not used in your project at all. You can remove them from the multi-instrument if you want, but remember that purged instruments use very little RAM anyway.
Since you do not need real-time audio features, you do not need an expensive multichannel audio interface, and you can set large audio buffers in your DAW to limit resource usage. A good choice is a high-quality two-channel audio interface (USB or FireWire) plus good studio monitors or headphones.
2. Computer software
You will need the following software:
- Operating System: Windows 7 64 bit
- Music scoring software: Sibelius or Finale.
- Digital audio workstation (DAW) software: Pro Tools, Logic, Cubase or other.
- Virtual instrument software: Kontakt or other (depending on the library).
- Orchestra sampling library: Vienna Symphonic Library, East West Quantum Leap Orchestra Library (EWQLSO) or other.
- If your DAW is 32 bit, you can use bridging software to overcome the 2 GB RAM limit: jBridge or other.
To work with articulations effectively, your DAW needs to allow multiple MIDI channels per track and to color-code notes by MIDI channel.
Setting up music software can be difficult. I recommend that you ask your system administrator to set everything up.
In this article I will use the following software to show the process: Sibelius, Cubase, Kontakt, East West Quantum Leap Orchestra Library Platinum XP, jBridge.
Do not forget to organize regular backups of all your configurations and project files. Also, save a new copy of the project before making any significant change. I recommend enabling auto-save every 10-15 minutes as well.
3. Creating multi-instruments
When the computer is ready, you need to create your own multi-instrument setup for each orchestral instrument type (e.g. violas).
A multi-instrument is a collection of articulation instruments, usually duplicated for several microphone positions (e.g. Central, Front, Surround). Articulation instruments for the same articulation at different microphone positions (e.g. Viola Staccato Central, Viola Staccato Front and Viola Staccato Surround) are triggered simultaneously by the same note on the MIDI channel linked to that articulation.
Here is an example of an instrument (Viola) with only two articulations (Staccato and Legato) and three microphone positions (Central, Front, Surround):
This is how the MIDI tracks look in the DAW (Cubase):
Each note in a MIDI track belongs to one of the MIDI channels (usually 16 in total). Here is an example for the Viola MIDI track; each color represents one MIDI channel. When the track is played, each note is played by the corresponding virtual instrument.
Each channel can be played by one or more instruments. This is how Kontakt looks with instruments loaded (solo tuba; only the Central microphone instruments are shown):
The first thing to do is create a shared plan of articulation numbers for all multi-instruments. This will keep you from getting confused by the hundreds of articulations across different orchestral instruments. For example, for East West Symphonic Orchestra I can suggest:
- Crescendo / Expression
- Tremolo / Frullato / Effect
Of course, you can deviate from these rules for particular orchestral instruments when you run out of space, but a shared plan keeps you organized.
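A shared articulation-number plan can also be kept as a simple machine-readable table. The sketch below is purely illustrative: the channel assignments are placeholders of mine, not the article's actual plan.

```python
# Hypothetical shared articulation plan: the same channel number stands
# for the same articulation group in every multi-instrument.
ARTICULATION_PLAN = {
    1: "Sustain / Legato",
    2: "Staccato",
    3: "Marcato",
    4: "Pizzicato / Stopped",
    13: "Crescendo / Expression",
    14: "Tremolo / Frullato / Effect",
}

def articulation_for(channel):
    """Look up the articulation group assigned to a MIDI channel."""
    return ARTICULATION_PLAN.get(channel, "unassigned")

print(articulation_for(14))  # Tremolo / Frullato / Effect
```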
You can use keyswitch instruments, which contain several articulations per instrument, or only single-articulation instruments. Keyswitch instruments have the following benefits and drawbacks:
- You can pack more than 16 articulations into a standard 16-channel MIDI stack.
- You set up one keyswitch instrument instead of every articulation instrument separately, which saves time.
- Keyswitch instruments can be harder to use when selecting articulations for notes. If you use keyswitches, it is usually recommended to use Expression Maps (Cubase) or a similar technology.
My general recommendation is to use a keyswitch instrument whenever you need more than one of the articulations packed into it. If you do not use Expression Maps, I do not recommend keyswitch instruments, because they make the piano roll hard to read.
When using VST Expressions, mapping CCs to instruments can become more difficult.
Now you can load the articulations into Kontakt and save the multi-instruments to disk (do not use the option to pack samples into the multi-instrument file).
You could create one multi-instrument for all orchestral percussion, but I recommend creating several multi-instruments for groups of percussion with similar sound, for more flexible mixing:
- bass drums and timpani;
- snare drums;
- metal percussion;
- wood percussion.
Of course, for percussion you will not use the same articulation-number plan as for the other orchestral instruments.
I recommend 0 dB volume for all articulation instruments, because most sample libraries are already balanced realistically.
If computer resources are limited, you can delete unused articulations from the virtual instruments in particular projects, but this requires manual work per project. Also, do not forget the DAW's freeze function, which can significantly reduce resource requirements. If you run into problems because your DAW is 32-bit, use jBridge.
Cubase 5.1 has several bugs concerning VST Expressions (e.g. the CC1 problem). If you use Cubase 5.1, I recommend avoiding VST Expressions and thus limiting each track to 16 articulations. In that case you may need an additional track for some instruments, e.g. violins; the best approach is to assign the least-used effects (e.g. violin sul tasto) to the instrument's second track.
4. Creating project template
When you work with orchestral samples, a lot of tracks and buses are usually involved. Create a project template so that you do not have to set everything up again for each new project.
You can set up the following:
- Expression Maps, if you want to use them (see below).
- Named MIDI tracks (e.g. Viole), linked to virtual instruments (e.g. Viole).
- Named virtual instruments with articulations, linked to named instrument microphone buses (e.g. Viole C, Viole F, Viole S).
- Instrument microphone buses linked to intermediate buses (see the explanation below).
- Intermediate buses linked to the master bus.
- Mastering effects in the master bus, if needed. Be careful not to distort the sound or reduce its "readability", which you will rely on heavily while setting up samples. I usually use only a limiter here.
- Instrument panning, if needed (e.g. mirroring the second violins). Note that different microphone channels in a library can have different panning conventions. In EWQLSO, for example, the Central microphone channels are panned in Kontakt, while the Front and Surround channels are pre-panned: the panning is already baked into the samples, so you do not pan them in Kontakt at all. To mirror an instrument, use the Stereo Dual Panner to send the left channel fully to the right and the right channel fully to the left in each of its instrument microphone buses (Central, Front and Surround).
- The volume of all tracks set to a balanced, even level. In the template you do not need to emphasize particular instruments; on the contrary, you want an even sound in which every instrument is audible.
- MIDI tracks grouped into Folder Tracks, so that you can mute and solo whole sections (Strings, Woodwind, Brass, Percussion).
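The mirroring mentioned in the panning item above is, at bottom, just a left/right swap of each stereo frame. A minimal sketch of that idea, operating on plain (left, right) sample tuples (an abstraction of mine; in practice the Stereo Dual Panner does this inside the DAW):

```python
def mirror_stereo(frames):
    """Swap the left and right channels of (L, R) sample frames,
    mirroring the stereo image of an instrument."""
    return [(right, left) for left, right in frames]

print(mirror_stereo([(0.5, -0.2), (1.0, 0.0)]))  # [(-0.2, 0.5), (0.0, 1.0)]
```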
You can select the articulation of each note manually (changing its MIDI channel or inserting additional keyswitch notes) or automate this (Expression Maps in Cubase, or a similar technology). Expression Maps bring the following benefits and drawbacks:
- You see the articulation's name, not just a MIDI channel number or keyswitch note. This helps you concentrate on the music and saves time.
- You can change how an articulation is triggered (e.g. from MIDI channel to keyswitch) without editing every note in the project that uses it.
- Expression Maps are an additional layer of automation that has to be set up. This costs time and can introduce errors that are harder to track down.
My general recommendation is to use Expression Maps when you have more than 3 articulations per track. Ready-made Expression Maps for many sample libraries can be downloaded from the Steinberg site.
One important matter is the intermediate bus configuration. These are the buses linking instrument microphone buses to the master bus. There are two main approaches: microphone buses and instrument buses. Section buses can also be involved.
I usually use microphone section buses (Strings Front, Strings Central, ...).
Of course, you can link instrument microphone buses directly to the master bus, but that will not give you enough flexibility during mixing and mastering.
The more you prepare before actually working on the project, the more time you save later. Think about the most frequent actions you will perform during the work and automate them. This applies especially to mouse clicks: replacing them with keyboard shortcuts usually makes hours of continuous work far more efficient.
The most frequent actions usually are:
- Assigning a MIDI channel to a note or a group of notes. I wrote a script for AutoHotkey (download below). It lets you select the first 10 articulations with the Numpad (1, 2, 3...) and the next ones with Ctrl (Ctrl+1 for 11, Ctrl+2 for 12, and so on). It works by automating mouse clicks on the MIDI channel field and then entering the channel number. Feel free to change the coordinates in Notepad if they differ on your system (use Window Spy for hints).
- Assigning a MIDI channel to a CC. In my AutoHotkey script, the Numpad / and * keys switch between "set MIDI channel for note" and "set MIDI channel for CC" modes.
- Selecting up-beat notes (Alt+2, Alt+3) to decrease their velocity (Alt+S, Alt+D), decrease their length (Alt+W, Alt+E), or change their articulation. You can use Logical Editor presets with assigned keyboard shortcuts for this (download below).
- Selecting short notes (Alt+Shift+S) and zero-length notes (Alt+Shift+Z).
- Increase (Alt+Q) and decrease (Alt+A) velocity.
- Nudge left ([ or Alt+Left or RightCtrl+MouseWheel) and right (] or Alt+Right or RightCtrl+MouseWheel).
- Nudge note ending left (- or Alt+Down or RightAlt+MouseWheel) and right (= or Alt+Up or RightAlt+MouseWheel).
- Toggle snap (J).
- Scroll up and down (MouseWheel).
- Scroll left and right (LeftCtrl+MouseWheel).
- Zoom vertically (U/Y or LeftAlt+MouseWheel).
- Zoom horizontally (G/H or LeftShift+MouseWheel).
- Randomizing note velocities and start times. You can use Logical Editor presets for this (see the Randomizing section for details). For start times you can also use Quantize with the Random Quantize parameter set.
- Opening the Expression Map setup and the Kontakt setup. You can create shortcuts for these in File - Key Commands.
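Mechanically, the randomization step (whether done via Logical Editor presets or Random Quantize) does something like the sketch below. The jitter ranges and the (start_tick, velocity) note representation are illustrative choices of mine, not Cubase's data model:

```python
import random

def humanize(notes, vel_jitter=6, time_jitter_ticks=10, seed=None):
    """Return a copy of (start_tick, velocity) note tuples with small
    random offsets applied, clamping velocity to the MIDI 1..127 range
    and start times to non-negative values."""
    rng = random.Random(seed)
    out = []
    for start, vel in notes:
        start = max(0, start + rng.randint(-time_jitter_ticks, time_jitter_ticks))
        vel = min(127, max(1, vel + rng.randint(-vel_jitter, vel_jitter)))
        out.append((start, vel))
    return out

print(humanize([(0, 100), (480, 100)], seed=42))
```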
When writing music for samplers you should pay attention to the following aspects:
- A sampler does not automatically add tempo variations to your score, as a real orchestra does. Consider adding tempo changes explicitly.
- A sampler does not change articulations automatically. The more articulations you specify, the closer the sound is to a real orchestra. You will usually have to set up all articulations in the DAW manually, but it is easier if they are already written in the score. Find the list of articulations available in your orchestral library for each instrument.
- Pay attention to velocity changes and make them smoother. Specify crescendos, diminuendos, forte, piano, sforzando, subito and other dynamic marks.
- One important way to make sampled orchestral music sound more realistic is to understand the intended character of each part of the project and to make the parts different and contrasting. Divide your work into small sections of 1-5 minutes, each with a short description of the intended character: aggressive, harsh, tender, sweet, swinging, dancing, march, waltz, gloomy, transparent, uncertain, rhythmic, uneven and so on. This keeps your goal in mind.
- When writing for groups of instruments, you can achieve a more diverse sound by specifying which instruments of the group play the part:
- marks like "I", "II", "III", meaning that only one specific instrument plays the part;
- marks like "a2", "a3", "unis", specifying how many instruments play the same part;
- marks like "div", meaning that the instruments of the group divide among the voices;
- marks like "metà" and "tutti", meaning that half or all of the instruments play the part.
If you want to vary the number of players in each instrument group (violins, flutes, etc.), there are two ways. Both increase the demands on your hardware, so be careful not to exceed your capabilities:
- Create separate tracks for the different player counts, and link each track to a separate Kontakt instance fully loaded with samples for that player count. Some libraries such as EWQLSO provide several player counts per instrument: for a staccato note, for example, you may have samples recorded by a solo musician, by 11 violins and by 18 violins. Keep in mind that not every sample exists for every player count.
- Mix different numbers of tracks (each with its own Kontakt instance) to create different combinations of instruments. This approach is harder: you may need more than two tracks, and you have to deal with sample interaction. If two tracks trigger the same sample, you get mere electronic doubling (and possible flamming, if you offset the notes of one track), which is rarely desirable. So you must ensure that different tracks play different samples, which is hard to achieve with round-robin instruments and impossible without round-robin.
After you finish the score, export the music to MIDI or MusicXML format. MIDI is the usual choice, but MusicXML is worth a try.
Preparing music for virtual instruments
In this section I will cover all the work that lies between writing the music and selecting samples for each particular note.
1. Importing project into DAW
Importing music into a DAW project is an important step. Without a plan you can end up importing many times and wasting time, and mistakes made during import may force you to throw away all subsequent work and start again from this step.
- Open your project template.
- Import the exported MIDI or MusicXML into your project template. Depending on your settings, you may need to import into a separate project first and then copy the music, tempo and time signatures into the template project.
- If a track contains several MIDI clips, join them with the Glue tool so that each MIDI track has a single clip. This lets you select all the notes of a track at once and makes the following steps much easier.
- Check that each track contains only the notes of the instrument it was intended for. Sometimes several instruments end up in the same track after a MIDI import. This is usually a problem of the export rather than the import; if you cannot find export options that fix it, try another format.
- Check that tempo and time signatures were imported correctly, using the project tempo track editor. If you need to import MIDI notes and tempo into an existing project, first create a temporary project, import the MIDI, then export the tempo track; after that you can import both the MIDI and the tempo track into the existing project.
- Check the pitch of each instrument. Pay special attention to the transposing instruments, which sound an interval above or below the written pitch. Some programs, e.g. Finale, may export transposing instruments correctly except for octave-transposing instruments such as piccolo, contrabassoon and double bass. I suggest building an Excel table of your transposing instruments and their transpositions. Then transpose any instruments that were not transposed correctly, so that the DAW shows the sounding pitch.
- Remove the tracks and virtual instruments you do not need in the project, to make navigation easier and save computer resources.
- To find corresponding notes in the DAW and the scoring program more easily, arrange the instruments in the DAW in the same order as in the scoring program.
- Make sure the measure numbers are exactly the same in the DAW and the scoring program. Usually you can achieve this by changing the "Starting measure number" in the scoring program.
- Notes can be doubled for various reasons (two identical notes with the same MIDI channel at the same time position). This is usually an export problem and, unfortunately, can be very hard to detect: doubling may go unnoticed until you begin randomizing note positions, and then it can ruin the sound. The best way to detect doubles in Cubase is to select all MIDI clips and use MIDI/Functions/Delete Doubles, then check the edit history: if something was deleted, there were doubles. As a rule, you can simply delete doubles after importing all tracks.
- Before playing the project for the first time, remove all unneeded data from the beginnings of the tracks: Program Change, Main Volume, Pan and so on. If you start playback before removing them, they can garble your instrument settings. You can do this by selecting all tracks and clicking MIDI/Functions/Delete Continuous Controllers.
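The transposition check can be backed by a small table like the Excel sheet suggested above. The intervals below are standard semitone offsets from written to sounding pitch, but verify them against your own score; the instrument names are illustrative:

```python
# Semitones to add to the written pitch to obtain the sounding pitch.
TRANSPOSITIONS = {
    "Piccolo": +12,          # sounds an octave higher than written
    "Clarinet in Bb": -2,    # sounds a major second lower
    "English Horn": -7,      # sounds a perfect fifth lower
    "Horn in F": -7,
    "Trumpet in Bb": -2,
    "Contrabassoon": -12,    # sounds an octave lower
    "Double Bass": -12,
}

def sounding_pitch(instrument, written_midi_note):
    """Convert a written MIDI note number to the sounding one;
    non-transposing instruments pass through unchanged."""
    return written_midi_note + TRANSPOSITIONS.get(instrument, 0)

print(sounding_pitch("Clarinet in Bb", 60))  # 58: written C4 sounds as Bb3
```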
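Conceptually, the doubled-note check described above looks for notes that are identical in start position, pitch and MIDI channel. A sketch of that check, on simplified note tuples rather than Cubase's actual data model:

```python
def find_doubles(notes):
    """notes: iterable of (start_tick, pitch, channel) tuples.
    Returns the duplicate notes that a Delete Doubles operation
    would remove, in the order encountered."""
    seen, doubles = set(), []
    for note in notes:
        if note in seen:
            doubles.append(note)
        else:
            seen.add(note)
    return doubles

notes = [(0, 60, 1), (0, 60, 1), (480, 62, 1)]
print(find_doubles(notes))  # [(0, 60, 1)]
```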
2. Joining multiple projects
When you work in the scoring program, it is often convenient to split a large piece into several files, each containing a part of it (e.g. Introduction, Part 1, Part 2, Final part). But if you import each file into a separate DAW project, you will have to repeat many actions in every project, which is inefficient.
On the other hand, for a very large piece (over an hour), a joint DAW project may grow very large, leading to long save times.
So you have a choice; but for a piece under an hour, I recommend joining all the parts into a single DAW project.
To join several parts into the project you will need to:
- Import each part into a separate empty project (you do not need the template project for this). You now have several Part Projects.
- Open the project template.
- Define the starting measure number of the first part (usually measure 1).
- Copy the tempo and time signature changes from the Part Project's tempo track into your template. This may involve manual work, because Cubase cannot import tempo and time signature changes starting at a cursor position.
- Copy all the music from the Part Project into the tracks of the project template.
- Repeat the previous three steps for each Part Project.
- Join all the MIDI clips of each track into a single clip per track.
- Now go back to the importing steps above and perform all the checks and corrections.
Before getting to the real work, set up all the tracks so that you can hear most of the music with the simplest articulations. I recommend setting all melodic instruments to legato or QLeg (sustain) samples (usually MIDI channel 1) and all percussion instruments to simple hits (this may involve different MIDI channels and notes for different instruments).
This does not take much time: select all tracks and use my Logical Editor preset for it (Set channel to 1). The result is a "playable" project with basic dynamics. This is important for several reasons:
- You can pre-audition the project and check that all instruments play the correct pitches and that all notes are in the correct time positions.
- You can check the tempo changes and refine them now.
- When you start selecting articulations for the notes of each instrument, you will always be able to hear how an articulation sounds together with the other instruments.
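The "Set channel to 1" preset mentioned above effectively performs a single trivial transformation, sketched here on simplified note tuples (the representation is mine, not Cubase's):

```python
def set_channel(notes, channel=1):
    """notes: (start_tick, pitch, channel) tuples. Returns copies with
    every note moved to the given MIDI channel, i.e. the simplest
    articulation in the shared plan."""
    return [(start, pitch, channel) for start, pitch, _ in notes]

notes = [(0, 60, 5), (480, 62, 14)]
print(set_channel(notes))  # [(0, 60, 1), (480, 62, 1)]
```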
Setting up samples
I recommend saving a new version of your project roughly every hour while you work, and also before any considerable change. Enabling auto-save every 15-30 minutes is a good idea too.
0. Know your samples
The first and most important thing is to study the samples you have, compare them, and choose the ones you need. Modern sample libraries can contain thousands of articulations, and studying them takes time. I recommend this approach:
- Read the manual for your library (see below).
- Listen to most of the articulations you are interested in. You can skip DXF and keyswitch samples at this step.
- Before setting up samples for a particular instrument, listen to all the samples you have for it and choose which ones to use.
- Make sure you remember what samples you have and how they differ. Ideally, build yourself a classification tree of the samples.
- Try and audition different samples in different parts of your piece.
1. Conductor and musician
One of the major problems of rendering orchestral music with samples is the balance between the conductor's point of view and the musician's point of view. You should understand both approaches to create a consistent, realistic arrangement.
The most general way is to think as a conductor and listen to the whole orchestra. But this means you must set up all the instruments up to a given position before you can listen to the whole orchestra up to that position. This is usually far more time-consuming than walking through each track individually, because you have to open and close all the tracks and remember the articulations of each one. When working from the conductor's point of view, do not forget to listen to the instruments separately as well: for a well-founded orchestral sound, each instrument should sound realistic on its own.
In the DAW the easiest way is to think only as the musician and listen only to the instrument's track. This gives the fastest setup of each track, because you take a track and go through it, setting the samples, without being distracted by the other tracks. The major drawback is that you do not think about the overall result, which tends to become messy, noisy and knotty. In this approach each virtual musician "plays" his part without listening to the others, which can shift accents and culminations to different positions; depending on the style this may be a benefit or a drawback, but it is usually not welcome throughout the whole piece.
My recommendations are the following:
- Start working on the project with the conductor's approach. After setting the samples for the first 1-3 minutes, switch to the musician's approach. This helps filter out global problems early, such as wrong articulations, that could otherwise force you to redo the work from the beginning.
- When working from the musician's point of view, think about the whole orchestra at the following positions: the start and end of your instrument's melody, significant changes of dynamics or articulation, and transitions between parts of the piece (where the character of the music changes). In passages where many instruments play the same rhythm, you can switch back to the conductor's approach.
- If the piece is long (more than 10 minutes), select samples in installments: first the opening 10 minutes for all instruments in turn, then the next stretch, and so on, placing the divisions wherever is most convenient musically. Make sure you are satisfied with the sound of all instruments before moving on to the next part. You can mark where you finished by cutting the MIDI clip with the Cutter tool.
- The notation of each part should be precise enough to show each musician the positions of culminations and accents.
When working through the MIDI tracks, I recommend following a fixed sequence of instruments. This matters because some instruments are easier to audition with others already in place. Within a section (e.g. strings), process instruments from the higher register to the lower, because the lower register usually carries less musical information and therefore sounds more natural against the higher-register instruments. Here is my sequence:
- Start with percussion; it is the easiest to set up.
- Keyboard, bells, harp and other fixed-intonation instruments go next.
- Woodwinds, from high to low.
- Brass, from high to low.
- Strings, from high to low. Strings are usually the most difficult: they have more long notes whose dynamics change over time, and string parts tend to contain more repetitive material, which sounds less realistic with samplers.
2. Check score
When you go through a track in the DAW, you need to compare it to the part in the score program note by note. Soon you will learn which differences arise when importing parts from your score program, through your chosen file format, into your DAW. Then you will be able to look mostly for these differences, but I recommend that you check everything.
For example, when importing from Finale 2012 through MusicXML to Cubase 5 I found the following problems:
- Hairpins are not recognized. You have to change velocity gradually with the line tool.
- Sometimes dynamic symbols are not recognized, or are recognized in the wrong position. This leads to wrong note velocities.
- Grace notes may have zero length and exactly the same start position as the next note. You will have to increase their length, move the grace notes back and reduce the length of the preceding notes. An easy way to do this when you have many of them is to select them all, set the length and nudge left.
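The grace-note repair can be sketched in code. This is a hypothetical sketch: the note dictionaries (start and length in MIDI ticks) are an assumed representation for illustration, not the actual Finale or Cubase data model.

```python
# Hypothetical fix for zero-length grace notes after a MusicXML import.
# Notes are plain dicts with "start" and "length" in MIDI ticks
# (an assumed representation, not a real DAW API).

GRACE_LEN = 60   # new grace-note length in ticks (e.g. 1/32 at 480 PPQ)

def fix_grace_notes(notes):
    """Give zero-length grace notes a small length and nudge them left,
    shortening the preceding note so nothing overlaps."""
    notes = sorted(notes, key=lambda n: n["start"])
    for i, note in enumerate(notes):
        if note["length"] == 0:                  # imported grace note
            note["length"] = GRACE_LEN
            note["start"] -= GRACE_LEN           # nudge left
            if i > 0:                            # shorten the previous note
                prev = notes[i - 1]
                overlap = prev["start"] + prev["length"] - note["start"]
                if overlap > 0:
                    prev["length"] -= overlap
    return notes
```

This mirrors the manual procedure: set length, nudge left, then trim the preceding note by whatever now overlaps.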
3. Select articulations
Selecting appropriate articulations is the most difficult and the most exciting procedure.
To select and set up correct articulations for orchestral instruments, you must know each instrument well, especially how it sounds naturally. If you do not, find and listen to different solos of the instrument, preferably on video (e.g. on YouTube). Pay attention both to long notes and to quick passages of short notes. Ideally you should hear how a real musician plays each of the articulations that you have in your library and plan to use.
Also, spend time listening to the articulations in your library and comparing them, to build your own understanding of how they sound and where they fit.
If the instrument plays together with many others, you can and usually should make its sound more dynamic by varying velocity, timing, articulations and maybe even pitch. The part by itself may sound a bit uneven, but this leads to a more realistic overall sound when it is played with the other instruments. If you do not do this, the result will sound more like electronic music, which you usually do not want.
I can give you several recommendations:
- Be very careful with special effects like glissandos, slides, slurs, Penderecki effects, clusters, bends, flutter, rips, shakes, trills and tremolos. I suggest using them sparingly, in places where they do not stand out and are covered by the sound of other instruments. The problems with effects are:
- Sampled effects are quickly memorized by the listener and, if repeated, begin to sound unrealistic.
- Sampled effects rarely suit the music perfectly, because their speed, intonation and pitch stand out and cannot be adjusted.
- Articulations can be classified into "short" (spiccato, staccato, marcato) and "long" (legato, sustained). Generally, use short articulations where the notes are short and long articulations where the notes are long. When the notes are very short, prefer articulations that can actually be played at that speed in real life. This is not an absolute rule: for example, you can create a tender but fast melody using legato samples. Short samples make the music harsher; if it must not be harsh, consider working with long samples. Short notes should usually be played at higher velocity, and you can even increase the portion of the C microphone of the instrument while these samples sound.
- With any articulation, and especially short ones, be very careful with release triggers. A release trigger is the sound produced when the key is released (or the note ends, in the case of MIDI). The problem is that release triggers may not sound realistic when the note is very short. This means that even if you know a particular articulation well, you may not know how it will sound when played with short notes. I can suggest two workarounds if you hear problems with short release triggers:
- You can edit the instrument to change how the release trigger sounds. This may involve a lot of testing, because you need to make sure that the new settings sound good at any note length and any velocity.
- An easier way is simply to increase the length of the notes played by short articulations. This often gives good results for articulations like staccato and spiccato, because they usually do not need release triggers on short notes.
- Most of the time you will be using round-robin instruments, which alternate samples automatically. They usually make the overall sound more realistic, as long as they do not stand out too much. Round-robin (RR) instruments are especially useful for fast passages. Round-robins may be classified into "natural" (e.g. natural 2-way staccato) and "artificial", which are not normally played like this in real life (e.g. artificial 6-way staccato). You can read about them in the orchestra library manuals listed below. There are some rare cases when you will not want to use round-robin, or will prefer a "natural" round-robin to an "artificial" one: usually when very few instruments are playing at the same time and the difference between the strokes begins to sound unnatural. Round-robin instruments come with different numbers of "ways"; usually a greater number of ways is preferable. The main thing round-robin gives you is making the repeated notes of a passage different each time the passage repeats. For this, choose the number of ways so that it is not a multiple of the passage size (the number of notes in the passage). For example, with a passage of 4 notes and a 3-way round-robin instrument you get: Note-1 (Way-1) Note-2 (Way-2) Note-3 (Way-3) Note-4 (Way-1) Note-1 (Way-2) Note-2 (Way-3) Note-3 (Way-1) Note-4 (Way-2). There are also scripts that create random round-robin instruments: they create a random order of all samples, play through all of them, then create a new random order, making sure that a single sample is never played twice in a row. Use random round-robin if you cannot achieve a natural sound in other ways. Another way to make round-robin more random is to switch some selected notes to a similar-sounding non-round-robin (NRR) channel.
This effectively shifts the round-robin period like this (3-note phrase, 3-way RR): Note-1 (Way-1) Note-2 (Way-2) Note-3 (NRR channel) Note-1 (Way-3) Note-2 (Way-1) Note-3 (Way-2) Note-1 (NRR channel) Note-2 (Way-3). For this you need to find a non-round-robin channel that sounds close to the round-robin notes. This method is more manual; if you need to process long passages or many passages, consider the random round-robin script described above.
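Both ideas can be sketched in code: the way-cycling of an ordinary round-robin instrument, and a random round-robin order that never plays the same sample twice in a row. This is an illustrative sketch, not a real Kontakt script (those are written in KSP); the sample names are placeholders.

```python
import random

def rr_ways(num_notes, num_ways):
    """Way index assigned to each successive note by a cycling
    round-robin instrument (1-based, as in the example above)."""
    return [i % num_ways + 1 for i in range(num_notes)]

# A 4-note passage with a 3-way RR: the way pattern drifts on each
# repeat of the passage, so repeats do not sound identical.
# rr_ways(8, 3) -> [1, 2, 3, 1, 2, 3, 1, 2]

def random_rr_order(samples, rounds, rng=random):
    """Random round-robin: play every sample once per round in random
    order, never letting the same sample sound twice in a row."""
    order = []
    for _ in range(rounds):
        batch = list(samples)
        rng.shuffle(batch)
        # reshuffle if the new round would start with the sample
        # that just ended the previous round
        while order and batch[0] == order[-1]:
            rng.shuffle(batch)
        order.extend(batch)
    return order
```

Within a round every sample appears once, so only the round boundary needs the no-immediate-repeat check.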
- Each sample has a different pre-accent time. This usually adds some welcome randomization, but can become an alignment problem if the resulting beat is too distorted. I show the problem in the picture below: you can see 4 samples with the same distance X (about 0.7 seconds) between their start times. The QLegato sample starts sounding immediately, while the Marcato sound is delayed by time A (0.1 seconds). As a result, the audible distance between the first and second samples is (X-A), while the distance between the second and third samples is (X+2A). This can distort the rhythm because of the Marcato delay. As usual, you have several options to cope with it:
- Simply nudge the delayed samples (Marcato in our case) slightly earlier: select all the notes involved and nudge them. You can select the whole articulation in the articulation lane.
- You can also edit the problematic .nki instrument, if you know how to do it.
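The nudging option can be sketched as a batch operation. This is a hypothetical sketch: the note dictionaries and the per-articulation delay table are assumptions for illustration (the delay values would have to be measured in your own library), not a real sampler or DAW API.

```python
# Compensating per-articulation pre-accent delay (illustrative sketch;
# the delay table is a measured-by-ear assumption, not library data).

PRE_ACCENT_TICKS = {
    "QLegato": 0,
    "Marcato": 48,   # ~0.1 s at 120 BPM, 480 PPQ (example value)
}

def compensate_delays(notes):
    """Nudge every note earlier by its articulation's pre-accent delay
    so the audible attacks fall back onto the grid."""
    for note in notes:
        note["start"] -= PRE_ACCENT_TICKS.get(note["articulation"], 0)
    return notes
```

Articulations missing from the table are left untouched, so the function is safe to run over a whole track.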
- Do not end up with a soup of many different samples in one phrase. This may be fine if the samples have something in common, but if they are all different, you can create an overly diverse sound that a real musician would never produce. Always check whether your result resembles the sound of a real musician.
- Avoid sending several simultaneous notes to the same articulation instrument. This can produce unwanted sound effects, because the samples are well aligned and sound similar. If you have chords or double stops, it is better to change some of the articulations so that all notes starting at the same time use different articulations.
- Feel free to mix several articulations to get new sounds. But do not forget that Dynamic Crossfade Accent instruments were created precisely for mixing staccato with legato notes. You can read more about them in your orchestra library manual.
You will find more recommendations on using samples in other parts of this section.
All articulations can be divided into two types:
- Absolute length (play to the end once triggered). For these, note length means nothing: they start with the note and play to the end of the recorded sample. This type is less common and is usually used for effect samples. Take into account that you cannot control the length of these samples with the note length.
- Stretchable (play from the note start to the note end). These stop sounding when the note finishes, by fading the main sample out and usually adding a release sound.
Most stretchable articulations have release trails. These are additional samples that are triggered when the note ends. Some release sounds are loud, some are quiet. Release samples are amplitude-matched to the current volume of the main sample: the software analyzes the amplitude of the waveform when the note is released, then triggers the release trail, automatically adjusting its dynamics so the two samples blend seamlessly. For some short samples there is a release sound while the sample is still sounding, but no release sample once the sound has ended (even if the note continues). This means that if you want a sound to finish naturally (without blending in other samples), you can increase the note length to let the natural end of the sample sound. After that, if the instrument is set up correctly, there will be no release sound. Using natural releases in this way can greatly improve sound quality, both for long and for short samples.
Sometimes the note length should be adjusted (increased or decreased) to avoid an unwanted clash between the release sound and the next note's sample.
You can also adjust length and volume (and other parameters of envelope) of release trails of any instrument (see EWQLSO manual).
To create realistic melodic passages you can apply beat modifications to them. These modifications add an accent on the beat. Note that this accent is not always wanted in orchestral music. The modifications are:
- Decrease the velocity of every n-th note. Adds a very slight change. Do not decrease note velocities too much.
- Decrease the length of every n-th note. Adds a moderate change. Usually the note is made half as long, which roughly corresponds to staccato.
- Change the articulation of every n-th note. Adds a drastic change. Be careful: this can easily clutter your composition. Also, see the articulation alignment problem described in the "Select articulations" section.
You can use my Logical Editor presets to automate these changes (download above). You can link them to keyboard shortcuts.
I will show you these modifications using an example. Here you can see an initial passage:
Now the upbeat (2nd) notes are cut to half length using a Logical Editor preset:
Now the upbeat (3rd) notes have decreased velocity using a Logical Editor preset:
Now upbeat (2nd) notes were selected with a Logical Editor preset and changed to other articulation (staccato). You can also see, that staccato notes were nudged to the left to avoid staccato alignment problem, discussed in the "Select articulations" section.
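The n-th-note modifications above can be sketched as one function. This is an illustrative sketch operating on assumed note dictionaries, not the Logical Editor presets themselves; the parameter names are my own.

```python
# Beat modifications on an n-th-note cycle (sketch analogous to the
# Logical Editor presets; the note representation is an assumption).

def modify_beat(notes, n, offset=0, velocity_drop=0, shorten=1.0,
                articulation=None):
    """Apply a change to every n-th note, counting from `offset`:
    drop velocity, scale length, and/or switch articulation."""
    for i, note in enumerate(notes):
        if i % n == offset:
            note["velocity"] = max(1, note["velocity"] - velocity_drop)
            note["length"] = int(note["length"] * shorten)
            if articulation is not None:
                note["articulation"] = articulation
    return notes

# Cut every 2nd (upbeat) note to half length, as in the example above:
# modify_beat(passage, n=2, offset=1, shorten=0.5)
```

The same call with `velocity_drop` or `articulation` reproduces the other two modifications.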
You can use randomization to increase the realism of musical passages, especially very fast ones. The principle is simple: you add small random values to the following parameters:
- Note position. Adds rubato effect.
- Note velocity. Adds dynamic variation.
- Note length. Use with caution, especially for short samples, which can lose quality if the note is shortened.
- Pitchwheel. Use it to add subtle intonation changes and increase realism. Pitchwheel changes do not affect release trails.
- CC values (CC1 modulation for DXF and other samples that depend on CC1).
- Volume. Not generally recommended.
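The position and velocity randomization can be sketched as follows. This is an illustrative sketch over assumed note dictionaries, not the Logical Editor presets; the jitter ranges are arbitrary example values.

```python
import random

# Randomizing note parameters for realism (sketch; the jitter ranges
# are illustrative and the note representation is an assumption).

def randomize(notes, pos_jitter=10, vel_jitter=6, rng=random):
    """Add small uniform random offsets to note position (in ticks)
    and velocity, clamping velocity to the valid MIDI range."""
    for note in notes:
        note["start"] += rng.randint(-pos_jitter, pos_jitter)
        note["velocity"] = min(127, max(1,
            note["velocity"] + rng.randint(-vel_jitter, vel_jitter)))
    return notes
```

Running it several times compounds the deviation, which matches the apply-the-shortcut-repeatedly workflow described below.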
I recommend adding all the randomizations after you have set up everything else. If you randomize and then edit, some actions (e.g. drawing the velocity of successive notes with the mouse) can destroy the randomization.
Be very careful with randomizing note position and velocity, because it can produce a very unnatural sound in the following cases:
- When the instrument stands out: it is very loud, or few or no other instruments are playing.
- When the random change in position is very substantial compared to the distance between the notes.
- Some sampled instruments have an uneven dynamic range, with a sharp jump in dynamics at some point. If your velocity randomization happens near this point, the result can be very jerky. The best thing you can do is avoid this point once you detect it (unless you want this effect).
You can use my Logical Editor presets to automate these changes (download above). To increase the effect, link a preset to a keyboard shortcut and apply it several times.
You can see both random positions and random velocities applied in the example below. The script was run several times, so the deviation is substantial:
One important drawback of position randomization is overlapping notes. This happens when the positions of two successive notes of the same pitch and channel are randomized so that the notes overlap, as you can see in the picture above (the last two G notes). The problem is that you get this event order:
First note start -> Second note start -> First note stop -> Second note stop.
This cuts the note off after "First note stop", which effectively mutes the second note. This drawback is difficult to cope with if you deliberately use overlapping notes of different channels on the same pitch. If you need to preserve these cross-channel overlaps while removing same-channel overlaps, you have to remove the overlaps in each channel separately. I use the following algorithm (for each track separately) to delete same-pitch overlaps while preserving the different-channel overlaps I need:
- Select all MIDI clips in one track.
- Apply random velocity with shortcut [Ctrl+F].
- Apply random position with shortcut [Ctrl+R] (several times).
- Dissolve the part (separate channels) into sublanes with shortcut [Shift+Alt+D].
- Remove overlaps (mono) in each lane with shortcut [Shift+Alt+F].
- Select all lanes and bounce selection with shortcut [Shift+Alt+B].
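The core of this algorithm, the per-channel mono overlap removal, can be sketched as follows. This is an illustrative sketch over assumed note dictionaries, not Cubase's "Delete Overlaps (mono)" function; grouping by (channel, pitch) is what preserves the cross-channel overlaps.

```python
from collections import defaultdict

def remove_overlaps_mono(notes):
    """Trim same-pitch overlaps within each channel: if a note would
    still be sounding when the next note of the same channel and pitch
    starts, shorten it. Same-pitch notes on different channels are
    deliberately left alone."""
    by_key = defaultdict(list)
    for note in notes:
        by_key[(note["channel"], note["pitch"])].append(note)
    for group in by_key.values():
        group.sort(key=lambda n: n["start"])
        for cur, nxt in zip(group, group[1:]):
            overlap = cur["start"] + cur["length"] - nxt["start"]
            if overlap > 0:
                cur["length"] -= overlap
    return notes
```

Because each (channel, pitch) group is processed independently, this reproduces the dissolve-then-fix-each-lane workflow in one pass.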
When you encounter a repeat in the same instrument, or in an instrument of similar timbre, you are usually glad to copy, paste and save time. Unfortunately, given the artificial sound source you are using, this will likely lead to a "tape effect", as if the same melody had been recorded on tape and played back several times. To avoid this I recommend:
- You can use randomizing functions to modify part each time it is repeated (random note position, random velocity - see "Randomize" section).
- You can use a different beat, or change downbeat to upbeat, when cutting note length and velocity (see the "Beat" section).
- You can use slightly different articulations, or change the articulations substantially, depending on your musical goal.
8. Long notes
Working with long notes is usually very time-consuming, because they are difficult to automate.
When you have a long note, you usually want to add crescendo or diminuendo to it.
Long articulations can be classified into:
- No release (NR). These articulations' length cannot be controlled: once started, they sound for a fixed time. These are usually special effects, such as frullato, where releasing the sample cannot sound natural.
- Finite release (FR). These have a release but a limited length: you can start the sample, but if you do not release it, it will release itself after some time, even if you have not sent a stop signal. The benefit of these samples is that they sound very natural if not released at all, and they sound good if released in time. However, there is a period when the sample starts to fade out naturally; a release triggered during it may not sound very good.
- Infinite release (IR). These have a release and unlimited length: the sample starts and waits for your signal to stop. This behaviour is achieved by looping inside the articulation instrument. The benefit is that you can use them for notes of any length. They will usually not give you a perfect release, but they always give a good release, like FR samples released before the end of the sample.
You should always audition long notes from beginning to end, because of the caveats described above.
You can build one long note from a single articulation of any type, or by joining several articulations of the same or different types. It is usually best to use a single articulation if you can.
There is another special type of articulation instrument in the EWQLSO library: Dynamic Crossfade (DXF). These are usually of the IR type. Dynamic Crossfade instruments are classified by what the Velocity and Modulation parameters change in their sound. The most interesting type of DXF instrument is Acc Vel, where the accent is controlled by note velocity and the sample velocity is controlled by the modulation controller. You can read more about DXF in the EWQLSO manual (see below).
Here is an example of building a long note from three instances of the same articulation with increasing velocity. You cannot see it in the picture, but each of the three subnotes continues up to the beginning of the first green note; this helps to avoid a release sound in the middle of the long note.
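The subnote construction just described can be sketched in code. This is a hypothetical sketch over assumed note dictionaries; the tick and velocity values are illustrative.

```python
# Building one long crescendo note from several overlapping subnotes of
# the same articulation (illustrative sketch; representation assumed).

def build_long_note(pitch, start, sub_len, velocities):
    """Create successive subnotes with rising velocity. Each subnote
    is extended to the end of the whole long note, so no release
    sound is triggered in the middle of it."""
    total_end = start + sub_len * len(velocities)
    notes = []
    for i, vel in enumerate(velocities):
        sub_start = start + i * sub_len
        notes.append({"pitch": pitch,
                      "start": sub_start,
                      "length": total_end - sub_start,  # run to the end
                      "velocity": vel})
    return notes
```

Each subnote starts one `sub_len` later but all of them end together, matching the overlapping layout in the picture.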
Here is an example of a very long note created from an IR-type articulation. This is also a DXF Acc Vel articulation: its sample velocity is controlled by modulation, which you can see decreasing, while the note velocity controls the starting accent.
If you cannot achieve sufficient attenuation with the modulation control, you can additionally use the main volume control. This is not recommended and should be done only if you cannot achieve the effect by other means.
You can draw arbitrary shapes in the modulation lane. For example:
Mixing and mastering
This part is very similar to mixing and mastering recordings of real musicians. During the previous steps you created artificial tracks, recorded by artificial musicians; now you have to mix and master them.
I will write this section later.
- EastWest Quantum Leap Symphonic Orchestra Pro Expansion Operation Manual.
- Steinberg Cubase Operation Manual.
- Native Instruments Kontakt Application Reference.
- Sound On Sound article about EWQLSO
- Sound On Sound article about EWQLSO XP
- Sound On Sound: Arranging for strings 1
- George Strezov about realistic samples use
- Wiki: List of musical symbols
- Wiki: Glossary of musical terminology
- Here you can download my presets for Cubase 5.1 (copy into %APPDATA%\Steinberg\Cubase 5\Presets\Logical Edit).
- Here you can download my keyboard shortcuts for Cubase 5.1 (import in %APPDATA%\Steinberg\Cubase 5\Presets\KeyCommands , then enable in File/Key commands).
- Here you can download my Autohotkey script for Cubase 5.1 (run in Autohotkey).
- Here you can download my Cubase 5.1 template projects, with all routing and Kontakt instruments set up, for chamber (solo instruments) and full (group instruments) orchestra. If you need Kontakt multis (NKM), you can open the templates and export an NKM from each instrument. There are also some NKM files in this archive, but they are for demo purposes; the latest NKMs should be exported from the CPR files.
- Here you can download my Reaper configuration, that is very close to Cubase config.
I would like to thank people who helped me write this article:
- Alexander Lavrov
- Sergey Arkhipenko
Article by Alexey Arkhipenko, 2013