Order of channels on the mixer

Crisp image

Good Evening all,
I have a question for you that you may be able to answer.
When setting up a mixer for a band, just about all the YouTube videos I have watched start with the drum kit on channel 1 through channel X, then the other instruments, and finally the vocal mics. Is there a reason for this? In my mind the vocals should be on the first channels and the drums last.
Please enlighten me.

Regards
Geoff
 
I'm sure this is subjective, so here's my two cents --

I do middle school instrumental music concerts regularly ... for the main orchestra/band floor I order the channels front to back, left to right -- whatever makes sense to me from a visual perspective. However, I have four wireless mics that can only plug into channels 12-16 on my DL1608 (the only combo jacks out of the 16), so those go last.

For the few bands I've done I also put vocals first as you suggested, then guitars, then drums, special occasional use instruments if any are last.

-- John
 
Unless you're dealing with tours, it's subjective -- do whatever works best for you. Kick on channel 1 is common. I'm not sure where that originated -- probably in recording studios decades ago -- but the kick tends to be a very important sound to get just right.

In the touring world, consistency helps when multiple engineers share the same patch. It also makes line checks between acts easier if everyone checks inputs in roughly the same order.

Depending on what mixer you're using and how many inputs it has, there's certainly a case to be made for breaking convention and putting the lead vocals where you have them on the first fader layer.
 
I believe a lot of the reason many of us put drums first and vocals last is a holdover from the large-format analogue console days. Many 40-channel consoles were physically large; six feet wide was not uncommon. I'm 6'2" and often slept in my console lid, to give you some perspective.

On those desks, most had subgroups and, if you were lucky, VCAs mounted at the right end or offset right of middle. Many engineers would assign those groups to drums, percussion, horns, keys, what have you, so once soundcheck was done you were "mostly" mixing the band on subs/VCAs. Since the vocals are generally your money channels, you wanted them underhand. By placing the vocals on the last couple of channels, next to the subs/VCAs, it was very easy to mix a large show within a foot or two instead of six.

Also, in a lot of genres of popular music the mix is built around the kick, and even in the worst conditions it's easy to find channel one.
 
There are no rules! Do whatever works best for your workflow. Before I get to a gig I lay out the console in Excel -- I have templates built for every console, with formatting that shows me where the layer breaks are on a digital surface. The key thing for me isn't which channel something falls on, but how quickly I can get to the channels I need, and that's entirely up to the user's perspective.

That being said, there is a typical way to patch drums: Kick, Snare Top, Snare Bottom, Hi-Hat, Rack Tom 1, Rack Tom 2, Floor Tom, Overhead 1, Overhead 2, then hand percussion inputs. Subtract the things you don't use, but that's the typical flow that everyone I've ever worked with knows and follows.
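For illustration only (not from any console spec, and the names are just this poster's convention), the "standard order minus what you don't use" idea could be sketched like this:

```python
# Conventional drum patch order described above; names are illustrative.
DRUM_PATCH_ORDER = [
    "Kick", "Snare Top", "Snare Bottom", "Hi-Hat",
    "Rack Tom 1", "Rack Tom 2", "Floor Tom",
    "Overhead 1", "Overhead 2", "Hand Percussion",
]

def build_patch(in_use):
    """Keep the conventional ordering, subtracting inputs this band doesn't use."""
    return [name for name in DRUM_PATCH_ORDER if name in in_use]

# A four-mic kit still lands in the standard order:
print(build_patch({"Overhead 2", "Kick", "Overhead 1", "Snare Top"}))
# → ['Kick', 'Snare Top', 'Overhead 1', 'Overhead 2']
```

The point is that the order is fixed by convention, not by what the band brings, so any engineer walking up to the desk knows where to look.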
 

Well... the speculation about it being a recording leftover is probably closest to correct; at least it jibes with my memories from my studio recording days. With 16/24-track, 2" machines it was common to put the bass guitar or kick on track 1 or 16/24, because any edge damage to the tape (guide scrapes, uneven tape pack, etc.) would then land on a less critical signal than, say, the star vocal. When syncing machines became a thing, the opposite edge track (16/24 or 1) was used for SMPTE linear time code (which is basically FSK).

In the studio where I was assistant engineer, the primary tracking engineer put bass on track 1, and that stuck with me until about 30 years ago, when someone "wised me up" that all the cool kids put the kick on 1.

The comments about big mixers with master/submaster/VCA/aux master sections in the middle of the desk are spot on. Having the money channel$$$ near the FX sends, returns, and VCAs/submasters made for an easier workflow.

That said, there's no particular reason to do things this way other than tradition.

"Do what you wanna, do what you will, just don't mess up your neighbor's thrill." -- Frank Zappa
 
Well, there you go... I have been enlightened again by the great and vast knowledge of CB. It makes perfect sense to me now why it would be done that way.
Thanks to everyone who shared their story.
Regards
Geoff
 
I have a music background, so I generally put the channels in score order, which puts vocals first, then guitars, keys, bass, drums, etc. Then I bus out the groups to do macro mixing with subs/DCAs/buses/etc., depending on the board.
 
Personally, when I'm setting up the board for a large mic'ed ensemble, I lay the channels out *in the physical order that the instruments sit across the stage*.

Since that's most commonly our house jazz band, or visitors, it looks like

Piano H/L - Keys L/R - E Bass - Standup - Rh Guit - Ld Guit - Vocals - Drums K/S/H/R/S/F/L/R - Sax - Bone - Trump

or something close to that. Part of the reason is that we're usually tracking to PT10, and that makes it cleaner to have the tracks land there in the order I see them as well.

Additionally, since we often mix on the iPad attached to our LS9-32, I try to put things in 8-fader groups, so (frex) I have the *whole drum kit on the same page*.
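The 8-fader grouping above is just channel arithmetic; as a hypothetical sketch (the channel names are from the jazz-band example earlier, not any actual LS9 patch), splitting a stage-order list into fader pages looks like this:

```python
# Illustrative sketch: chopping a channel list into fixed-size fader pages,
# the way an iPad fader view or a console layer presents 8 faders at a time.
PAGE_SIZE = 8

def paginate(channels, page_size=PAGE_SIZE):
    """Split a channel list into consecutive fader pages of page_size."""
    return [channels[i:i + page_size] for i in range(0, len(channels), page_size)]

# Hypothetical stage-order patch, loosely following the layout above:
stage_order = [
    "Piano Hi", "Piano Lo", "Keys L", "Keys R", "E Bass", "Standup",
    "Rh Guit", "Ld Guit",                      # page 1 ends here
    "Vocal", "Kick", "Snare", "Hat", "Rack", "Snare Btm", "Floor", "OH L",
    "OH R", "Sax", "Bone", "Trumpet",
]
for page in paginate(stage_order):
    print(page)
```

In practice you would nudge the patch (or leave blank channels) so a group like the drum kit doesn't straddle a page boundary, which is exactly the "whole kit on one page" goal described above.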
 
I follow Jay's left-to-right approach as it helps me shave valuable milliseconds off my response time -- the hand goes where the eye was looking on stage. But with DCAs, VCAs, and layers, and the need to flip layers on virtual console surfaces when mixing from iPads, one has to temper that by keeping the $$ channels (so to speak) near at hand. If I'm mixing by iPad I usually have two, and sometimes three, running: one for the main mix, a second for monitors/video send/streaming, and a third for deep dives into channel processing without losing access to the important mix channels. Like J Kowto, the mixer I tote around if nothing bigger/better is at hand is a DL1608 -- rich, powerful, but not instant in layer switching.
 
If I have to carry 3 tablets, a phone, a digital crystal ball and Ouija Board to mix audio... I'd rather have a physical surface with sufficient controls and metering. Really.
 
I'll send Oscar the Grouch along to help you with the Ouija board schlepping :)
 
No really, why would you even try to mix on multiple iPads? I use an iPad for walking around and checking things like front fills, checking on stage with the performers during soundcheck, and occasionally during a show as an easier way to get to an EQ screen. I would never try to mix a concert, let alone a theatrical performance, on an iPad -- let alone multiple ones. We have consoles for a reason. I understand management's argument for keeping the board out of the house, and I "can" mix on an iPad, but it will never be as good as on a console. I can usually convince the money people that if they give up a couple of seats for my mix position, the results will be worth it.
 
You do it if it's what you have available to you. For example: the venue's ancient analog console died, they (stupidly) bought a new analog console with zero processing, and you've got a 16-channel extravaganza going on (musical theater) where you need compression, EQ, and snapshots as well as house/monitor/film feeds.

Another time - a rental in a theater out deep in the woods - limited space, rental budget, and power.
 
Because lots of digital boards don't give you sufficiently quick access to things.

The telco called it an 'applique', and so do I.

He's using them *in addition to* the main control surface, unless I read him wrong, just as I do.
 
I think using additional displays was to keep up with the lampies... /nudge, wink

The reason we *need* to use these devices is that budget and small-format consoles don't have the real estate for more/bigger displays, more controls with readouts, etc. Or the venue, in spite of being a space for performance, lacks a suitable place for the hardware needed to perform the craft of audio.

Bah AND humbug.

Yeah, I know that theater and various types of presentations and entertainment take place in a wide variety of "venues" and under a variety of circumstances... but if I have to juggle tablets and laptops and bears (oh my!), there had better be a compelling use case, or I'm holding out for a "real" console that meets my workflow needs.

And that brings us to this: mixerpersons like me are dying out (literally), and if you've watched kids using touch screens and other non-traditional UIs, they're fast. It's what they grew up with and got good at. There will continue to be a need for physical controls, but UI development will continue to favor greater use of "soft" controls and displays. /digital crystal ball...
 
Yeah, they're fast... but that's not the entirety of the thing, IMHO.

Even if you're fast, having to waste time paging through a poorly structured UI to get to the controls you *need this millisecond* is not gonna fly in the long run. It's not about hard vs. soft controls.

It's about being able to have multiple *views* of the soft UI, focused in different places.

When I'm mixing an auditorium Zoom call, for example, the sends *and* returns are both on the input layer, but I need immediate access to the masters layer as well, possibly *while I'm moving the handles*. So I focus the iPad on the master layer and Bob's my uncle. (Bob was actually my dad, but that's not important now)
 
Your dad is your uncle? That's either a soap opera plot or family reunions get really interesting. /nudge, wink. Obligatory nod to song "I Am My Own Grandpa" (YT search hint).

The workflow you propose (and I fully understand the need for it) is what it is, and it's not substantially different no matter what the control surface -- and for me, that means real faders and buttons and knobs whenever possible. I don't have to look down to know my fingers are on the faders and haven't crept off onto the bus assignments or a mute button, as when mixing on glass.
 
It's part of the reason the M7CL is still being held onto in some places as a monitor desk: it's got all the faders. That being said, layers are lightning fast these days, even on entry-level digital boards. You just need to know the path without needing to physically see it, which is just a matter of learning patterns. I still often use my remote iPad as an extra screen so I can plunk through an EQ or focus on a specific channel. I've worked on a lot of desks, have had a lot of engineers through my space, and 90% of the time the issue is slow people, not slow hardware.

That being said, I'd never want to mix a show on a QSC TouchMix or a rack-mounted mixer that ONLY has a tablet for a surface, and I don't love the workflow of the lower-end Soundcraft desks or the patching of the Roland M series. Layers are the name of the game these days, and they've "flown" for a long time now. The highest-end desks let you customize just about anything, avoiding those millisecond delays by mapping what you want where you want it. But on all the main brands/lines I can think of, nothing vital is buried deeper than one or two button presses. I would argue that large-format analogue mixers are actually slower, because you have to physically extend your arms further to reach the full extent of the desk. I learned on analogue gear, and I'll take layers any day of the week: it's faster to keep my hands in tight with more at my fingertips, and a finger move is faster than a full arm extension -- especially once you've built a show and are running everything you need on a central bank using scenes and DCAs.
 
