Digital Mixer Trends/Futures/Philosophy

On the front there is a headphone jack with a volume control, and a talkback input with its own volume control.

It does not have many built-in inputs or outputs, and relies on the Dante integration for most of the inputs.

The spec sheet is on their website.

What do you think about a 2.5ms delay (Omni In to Omni Out)?

Oh, and of course sequential numbering isn't really a big deal to companies. How many years and revisions have we had the Source 4 now?
 
How many years and revisions have we had the Source 4 now?

That number appeared to be based on something (the number of filaments in the lamp), whereas many other model names seem arbitrary. If you want really confusing model names, look at the DiGiCo mixing consoles. The ordering of the SD series is the most arbitrary numbering system I've seen to date. The SD7/8/9/10/11 may reflect the order in which they were released, but certainly not the order from smallest channel count to largest, or least powerful to most powerful.
 
What do you think about a 2.5ms delay (Omni In to Omni Out)?

It's alright. Right now 2.5ms is not bad, but in 10 years it might be part of the conversation when choosing consoles. A 5D has 2.31ms and the MixRack from Avid has <2.3ms. The only company killing the latency issue is DiGiCo, at under 1ms. A friend of mine here at school works in a lab that has built an AD/DA that can operate at 35 microseconds (0.035ms). It can't yet convert anything close to real audio, but major improvements are in the pipeline. If Yamaha keeps this as their flagship for too long, that number might become a problem, but not for a while.
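For a sense of scale, here is a minimal sketch converting those quoted latency figures into whole samples at a 48 kHz sample rate (the labels are just shorthand for the numbers above, not official spec citations):

```python
def ms_to_samples(latency_ms, sample_rate_hz):
    """Convert a latency in milliseconds to samples at a given sample rate."""
    return latency_ms / 1000 * sample_rate_hz

# Figures quoted in the post above, at a typical 48 kHz rate.
for label, ms in [("2.5 ms Omni-to-Omni", 2.5),
                  ("Avid MixRack", 2.3),
                  ("DiGiCo (sub-1 ms)", 1.0),
                  ("lab AD/DA", 0.035)]:
    print(f"{label}: {ms} ms = {ms_to_samples(ms, 48000):.1f} samples")
```

So 2.5 ms is 120 samples of buffering end to end, while the experimental converter is under 2 samples.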

It does not have many built in inputs or outputs, and relies on the Dante integration for most of the inputs.
Shut up and take my money! What a terrific decision to dump EtherSound for Dante, and to choose it over MADI. I think in this scuffle of standards, Dante is going to come out on top.
 
Based upon my experience with digital broadcast consoles, 2 to 3 ms of latency is a big issue if you are using in-ear monitors. In just a few years, deafened musicians will start filing lawsuits. Why? The delay causes phase cancellation in the singer's head, and the hearing damage happens when they turn themselves up to extremely high levels to hear themselves. Long live analog.
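That phase-cancellation point can be made concrete: summing a signal with a copy of itself delayed by some amount (here, direct sound in the singer's head against the delayed in-ear feed) produces comb-filter nulls at odd multiples of half the inverse delay. A quick sketch of the arithmetic:

```python
def comb_null_freqs_hz(delay_ms, count=4):
    """Null frequencies (Hz) when a signal sums with a copy delayed by delay_ms.
    Nulls fall at odd multiples of 1/(2 * delay)."""
    return [(2 * k + 1) * 1000 / (2 * delay_ms) for k in range(count)]

# A 2.5 ms console delay puts the first null at 200 Hz,
# squarely in the low end of a vocal.
print(comb_null_freqs_hz(2.5))  # [200.0, 600.0, 1000.0, 1400.0]
```

The singer can't EQ those notches back in, which is why the instinct is to just turn up.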
 
It's alright. Right now 2.5ms is not bad, but in 10 years it might be part of the conversation when choosing consoles. A 5D has 2.31ms and the MixRack from Avid has <2.3ms. The only company killing the latency issue is DiGiCo, at under 1ms. A friend of mine here at school works in a lab that has built an AD/DA that can operate at 35 microseconds (0.035ms). It can't yet convert anything close to real audio, but major improvements are in the pipeline. If Yamaha keeps this as their flagship for too long, that number might become a problem, but not for a while.
The issue is not just A/D and D/A latency but also the latency associated with processing and routing. In fact, the A/D and D/A latency may be only a small portion of the overall latency.

You also have to consider whether the consoles incorporate some form of delay compensation. A simple example: split a mic to two channels. The A/D latency should be the same for both channels, but if one channel is heavily processed and the other is not, then the processed channel will have additional latency associated with the processing. If you then combine those two channels on a bus without delay compensation, you would be summing the same signal with a phase differential. So, do you allow that? Or do you calculate an absolute worst-case potential latency and then apply delay as required to each channel so that every channel has that same latency? Or do you constantly calculate the latency for every channel, determine the worst case, and apply an appropriate compensation delay to each channel so that the latency on every channel equals the worst case at any time? A similar situation applies wherever channels or buses may sum, and to the outputs (should latency from any input to any output be the same regardless of the path or processing?).

This can get into tradeoffs, and each manufacturer will use the approach that works best for their console architecture. Assuming a potential worst-case latency may mean assuming a longer latency, but it simplifies the math. Keeping track of the delays in 'real time' and applying constantly varying compensation may allow accommodating shorter 'worst case' delays, but requires more processing to do all the related math on the fly. Not applying delay compensation, or applying only an overall input-to-output compensation, is less complicated and should result in lower latency for most paths, but could be problematic with concepts such as parallel compression, where you split a source to two inputs (or an input to two buses), compress them differently, and then sum them back together.
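The worst-case compensation idea above can be pictured with a toy model: copies of one source travel paths with different latencies, and the bus either sums them as they arrive (comb filtering) or pads every path up to the slowest one first (aligned). This is a sketch under those assumptions, not any real console's architecture:

```python
import numpy as np

def run_paths(src, latencies, compensate):
    """Send copies of src through paths with different sample latencies, then sum.
    With compensate=True, every path is padded up to the worst-case latency
    so all copies stay sample-aligned on the bus."""
    worst = max(latencies)
    out = None
    for lat in latencies:
        delay = worst if compensate else lat
        # prepend the path's effective delay, pad the tail to a common length
        sig = np.concatenate([np.zeros(delay), src, np.zeros(worst - delay)])
        out = sig if out is None else out + sig
    return out

impulse = np.zeros(8)
impulse[0] = 1.0
naive = run_paths(impulse, [0, 3], compensate=False)  # two impulses, 3 samples apart
comp = run_paths(impulse, [0, 3], compensate=True)    # one aligned impulse, gain 2
```

The uncompensated sum is exactly the split-and-process scenario from the post above; the compensated sum trades 3 extra samples of latency on the fast path for coherent summing.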
 
