"Vsync" for DMX512?

Stevens R. Miller

Well-Known Member
This will be a hard question to ask, as I probably am not acquainted with the proper terminology. Let's see if I can make myself understood. Let me start with a comparison to something I (think I) do understand pretty well:

Video is created by painting a series of still images on a monitor screen at a rate high enough to fool the brain into thinking it is seeing continuous motion. Typically, the rate is constant, often 60Hz. When each image captures the scene as it was at a fixed instant, and successive images show the scene as it changes from one of those instants to the next, we see smooth, natural motion on the screen.

So, if a video camera captures frames at a constant rate of 60 per second, and we play those back at 60 per second, we, for lack of a more sophisticated way of putting it, get what we want.

Video games face a problem not faced by video recorders: the scene might change while a frame is being drawn onto the monitor. That's because the frame isn't presented on the monitor all at once. Instead, it typically starts being drawn from left to right along the top-most row of pixels, then continues from left to right across the second row down, and so on, usually using up the entire frame interval to complete the process. This means that if a computer is generating the frames, the frame last generated by the computer may be replaced by a new frame while the monitor is part-way through the process of drawing the frame. Lots of video games show this problem and, because of the way it looks, it is often called "tearing."

Here's a really good example:

[Attached image: example of screen tearing in a video game]


To solve this, a game can be written to draw its frames into a buffer, then make that buffer available for use after the current frame is completely presented, with the guarantee that it will not be changed after being made available. The game program alternates between two such buffers, building the next image in the buffer you don't see, while leaving untouched the buffer you are seeing at any given moment. The feature that makes this possible is called "Vsync," for "vertical synchronization." It requires support from the graphics display hardware, and cooperation from the game software. Without both, tearing is inevitable. This phenomenon, where two discrete sequences are unsynchronized and irregular visual effects appear when the sequences are somehow merged, is more generally known as "janking," a time-based cousin to the space-based aliasing in computer graphics everyone knows as "jaggies."

I am sure you are wondering, if you have read this far, what this has to do with theatrical lighting.

Well, I have written some software to dim lights via DMX512. Until recently, all the lights I have worked with used halogen bulbs. These don't change brightness very fast. From "full" to "out" takes about a second. "Out" to "full" is a bit faster, but still not instantaneous. All other transitions from one level to another also take some time, owing to the simple fact of that being how halogen bulbs behave. But now I am dealing with LEDs, which change intensity virtually instantly. For crummy LEDs, which often have strikingly non-linear dimming curves, this creates a noticeable "stepping" at the low end of their brightness. A change from, say, 20 to 21 results in a sudden visible jump up in the lighting. (A lot of you warned me about this; you were right.) A halogen bulb, if it even acts this way, hides the jump by virtue of its own inherent smoothing of transitions from one level to another (a result of the fact that halogen bulbs just don't change intensity instantly).

Now, unless there is an LED light out there that will interpolate internally from one DMX eight-bit level to another (is there?), I'm going to be stuck with this. But, it gets worse if my software computes changes at one rate, and updates sent to the LED occur at another rate. Roughly, a full DMX512 universe gets updated 50 times per second. If my software computes changes 50 times per second, but does not synchronize changes with the transmission of DMX512 frames, I'm inevitably going to miss some changes (that is, two frames that should send different values to an instrument will actually send the same value, because my computed change comes a little too late), and I'm inevitably going to catch up on the missed ones by skipping the ones I missed (that is, two frames that should send close values to an instrument will send values that are greater in difference than they would be if my changes and my DMX512 frames were synchronized). (NOTE: This topic often provokes the suggestion that the changes should just be computed twice as often as the frame rate, or some other multiple. Instead of going on at even more tedious length, I will just say this doesn't solve the problem.)

This not only means I am dealing with visibly discrete changes in my LED lights, it also means that they fade unevenly, with some pairs of frames not changing them at all, and others changing them more than I wanted, to catch up with the ones that didn't change (or vice versa). That's a form of janking. With halogen lights, this never made itself apparent to me, since they impose their own "smoothing" effect on changes. Halogens are actually janking too, but you never notice it. With LEDs, you do.
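
To make that mismatch concrete, here is a toy Java simulation (all numbers and names are invented, purely for illustration). When the two loops free-run at almost, but not exactly, the same nominal rate, some adjacent frames repeat a level and others jump two steps to catch up:

```
// A toy simulation (all numbers invented) of what happens when the level
// computation and the DMX transmission free-run at almost, but not exactly,
// the same nominal rate. If the computation runs a hair slow, some adjacent
// frames repeat a value; if it runs a hair fast, some adjacent frames jump
// by two steps to catch up. Only the anomalous frames are printed.
public class JankDemo {
    public static void main(String[] args) {
        demo("compute slightly slow", 20.6);
        demo("compute slightly fast", 19.4);
    }

    static void demo(String label, double computePeriodMs) {
        System.out.println(label + ":");
        double framePeriodMs = 20.0;                     // frames leave every 20 ms
        int prev = 256;
        for (int f = 0; f < 40; f++) {
            double sendTime = f * framePeriodMs;
            // computations finished by the time frame f goes out
            int done = (int) Math.floor(sendTime / computePeriodMs) + 1;
            int level = Math.max(0, 255 - (done - 1));   // one fade step per computation
            if (level == prev) {
                System.out.printf("  frame %2d: %3d  <- level repeated%n", f, level);
            } else if (prev - level > 1) {
                System.out.printf("  frame %2d: %3d  <- catch-up jump%n", f, level);
            }
            prev = level;
        }
    }
}
```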

Now, I can probably write my program to synchronize with the section that sends the DMX512 frames. That's going to be a challenge, but I knew the job was dangerous when I took it. My question is this: How do professional systems deal with the problem of synchronizing computed fades with the transmission of DMX512 frames? Is there an equivalent in DMX512 programming to the Vsync used to solve this problem in video games? If so, does it have a name?

Thanks!
 
FWIW, DMX can arrive at anywhere from 0 to 829 times per second depending on the number of bytes in the frame and the rate of the sender. The protocol allows for a single-byte payload.
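
For the curious, here is the rough arithmetic behind those figures (a quick sketch; the slot time comes from the 250 kbit/s line rate, and the minimum break and packet spacing used here are approximations that vary a little by spec revision):

```
// Rough DMX512 frame-timing arithmetic (approximate spec minimums).
public class DmxTiming {
    public static void main(String[] args) {
        double slotUs = 11 * 4.0;          // 11 bits per slot at 4 us per bit = 44 us
        double breakUs = 92, mabUs = 12;   // approximate minimum break and mark-after-break
        double fullFrameUs = breakUs + mabUs + (1 + 512) * slotUs;  // start code + 512 slots
        System.out.printf("full universe: %.0f us per frame, about %.0f frames/s%n",
                fullFrameUs, 1_000_000 / fullFrameUs);              // roughly 44 frames/s
        double minFrameUs = 1204;          // approximate minimum time between breaks
        System.out.printf("minimal frame: about %.0f frames/s%n",
                1_000_000 / minFrameUs);                            // roughly 830 frames/s
    }
}
```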

Many LEDs and moving lights will work at a higher internal resolution and implement predictive interpolation in their onboard processor to smooth the steps. It is fairly common to find 14- or 16-bit internal resolution and 400-10,000+ Hz refresh rates within the fixture. The onboard drivers will ramp between levels based on the history of the last frame(s) received, with smoothing on small deltas and little or no smoothing on large deltas.
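
Purely as a guess at how that might look inside a fixture (my own sketch, not any manufacturer's actual firmware), something like:

```
// A guessed-at model of delta-dependent smoothing in a fixture's processor.
// All thresholds and rates are invented for illustration.
public class FixtureDimmerModel {
    private double internalLevel = 0;        // 16-bit-ish internal resolution (0..65535)
    private double target = 0;
    private double stepPerUpdate = 0;

    // called whenever a new DMX frame arrives (~44 Hz)
    public void onDmxLevel(int dmx8bit) {
        double newTarget = dmx8bit * 257.0;  // expand 0..255 to 0..65535
        double delta = Math.abs(newTarget - internalLevel);
        target = newTarget;
        if (delta > 10 * 257.0) {
            // big jump: treat it as a bump cue, apply it in one internal update
            stepPerUpdate = delta;
        } else {
            // small jump: spread it across the internal updates until the next
            // frame, e.g. a 1200 Hz internal refresh against a 44 Hz DMX rate
            stepPerUpdate = delta / (1200.0 / 44.0);
        }
    }

    // called by the fixture's internal refresh, much faster than DMX
    public void onInternalRefresh() {
        if (internalLevel < target) {
            internalLevel = Math.min(target, internalLevel + stepPerUpdate);
        } else {
            internalLevel = Math.max(target, internalLevel - stepPerUpdate);
        }
        // internalLevel would then drive the PWM duty cycle
    }
}
```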
 
The DMX spec has a standard transmission rate. Assuming your DMX source is functioning within spec, you should always know what the DMX frame rate is. If your receiver is dropping frames, or can't decode the data and pass it on to the LED driver fast enough, then the real issue seems to be in the receiver/decoder. I suppose if you wanted your receiver to buffer the data before passing it on to whatever it is controlling, you could do that, but since there is no error checking in DMX, if you miss a frame, you aren't going to get it back.

Consider that at the DMX refresh rate of about 44 Hz, if you fade from 0 to FL in a 1-count, you can only transmit 44 data points. As such, the transmitter has to calculate a curve to choose which values to send, and ultimately the receiver has to either accept that raw data or be programmed to smooth the fade. Of course, the interesting issue is that DMX receivers are inherently dumb: they have no idea where the target level is; they can only react to the current data. So, unless you wanted to buffer the data, calculate a smooth fade, and introduce latency into the system, you pretty much just have to accept the packets as they come and feed the level information to whatever you are driving.
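
To put rough numbers on that, here is what a one-second 0-to-full fade looks like when quantized to ~44 frames (a quick sketch, assuming a plain linear curve):

```
// Quick arithmetic for the example above: a 1-second fade from 0 to full,
// quantized to the ~44 frames DMX can deliver in that second.
public class OneCountFade {
    public static void main(String[] args) {
        int frames = 44;
        for (int f = 1; f <= frames; f++) {
            int value = Math.round(255f * f / frames);   // linear curve for simplicity
            System.out.print(value + (f < frames ? ", " : "\n"));
        }
        // prints 6, 12, 17, 23, ... 249, 255 -- every step is 5 or 6 DMX counts,
        // which is exactly the coarseness the receiver either accepts or smooths
    }
}
```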
 
Having written a lighting control program myself, I think you'll find that you're not going to be able to smooth out an LED fixture via the software. You'll see the difference between, say, DMX value 2 and DMX value 3 no matter what you do. More expensive LED fixtures interpolate what the pulse width change from 2 to 3 is and 'send' the LED what would be a 2.1, 2.2, 2.3, etc. internally. That is my understanding anyway - if you crack this nut, please let everyone know. Thank you for the detailed video explanation in any event.
 
Having written a lighting control program myself, I think you'll find that you're not going to be able to smooth out an LED fixture via the software. You'll see the difference between, say, DMX value 2 and DMX value 3 no matter what you do. More expensive LED fixtures interpolate what the pulse width change from 2 to 3 is and 'send' the LED what would be a 2.1, 2.2, 2.3, etc. internally. That is my understanding anyway - if you crack this nut, please let everyone know. Thank you for the detailed video explanation in any event.

I agree that there is no software solution to the quantized intensities LEDs show at their lower levels. Because of that quantization, however, the janking effect is even more pronounced than it would ever be with halogen lights. To beat it, one must double-buffer the DMX frames, and synchronize their updates so every frame actually changes every channel to the level closest to what it should be when that frame goes out. If you don't synchronize the buffers, janking is unavoidable.

A couple of you have mentioned that some LEDs internally add intermediate levels to adjacent DMX levels, to smooth out transitions. That's good to know. Can you point me to a spec or two for actual hardware that does this?

The remaining question I have is about coping with janking in the console. Does this get addressed by any manufacturer? Or is it something that's always been done by synchronization, so no one feels the need to brag about it?
 
For crummy LEDs, which often have strikingly non-linear dimming curves, this creates a noticeable "stepping" at the low end of their brightness. A change from, say, 20 to 21 results in a sudden visible jump up in the lighting. (A lot of you warned me about this; you were right.)
Yeah, that's why 16 bit dimming is catching on.

Some LED fixtures will have a control channel to change the internal fade rate, but I'm guessing yours don't. For instance, the Chauvet Rogue R2 wash has two selectable fade rates. The Hex 9 has several. The Elation fixtures use a menu option instead of a DMX channel. The ColorSource PAR claims, "15-bit virtual dimming engine provides smooth, high-quality theatrical fades." The Chauvet Ovation profile fixtures have a setting for both the dimming speed and the curve.


The only thing I can think of in your case is to increase your transmission rate by decreasing the number of transmitted channels to the bare minimum. Hopefully, these fixtures can handle that.
 
This device (https://www.amazon.com/dp/B075FHJM35/?tag=controlbooth-20) offers 16-bit processing for raw LEDs. I would be interested in what the folks at Enttec and DMX King have to say about what you call janking. The timing of their internal microprocessors would be independent of what the software would be sending.
That's the point of Vsync. To be able to synchronize, you need a signal from the display hardware (or, in this case, the DMX adapter) that lets your code know when a new frame is about to be sent (or, a little better, when the most recent frame has been completely sent). This means that the animation (or fading) frame rate is being clocked by the output device. For a large universe, that's going to be about once every 20-25ms. For a smaller universe, that could be as short as a millisecond or so. Now, as your universe gets smaller, there's less to be computed, so the shorter time is kind of made up for by there typically being less to do. However, if one's software just can't keep up, then it needs to start sending updates on every second (or third, or fourth) signal it gets from the output hardware, and compute the new levels for the longer intervals.

For example, suppose you are sending DMX frames once every millisecond (which you could come close to doing with a very small universe), and you were able to compute a frame within a millisecond. Each new frame you computed would change values from its predecessor by whatever was appropriate for a one-millisecond interval. So, if you were going to fade from full to out in one-tenth of a second, you'd send about a hundred frames, with values (in the appropriate channel) starting with 255, 252, 250, 247, 245, and so on, down to 10, 8, 5, 3, 0. Those values would be as close as you could get to the actual interpolated value for the amount of time that had passed since the fade started before the frame containing each value was sent.

If, however, your code detects that it just can't keep up, it would need to adapt itself by waiting for every second signal before indicating that a new buffer of levels was ready to send. That would give your code two milliseconds instead of just one to compute each new frame, but each new frame would have to set a level appropriate to the amount of time that had passed since the fade began, which, in this slower example, would be twice as great a change as before. So, instead of a hundred frames sent over the course of one tenth of a second, you'd compute fifty frames, with values starting with 255, 250, 245, 240, 235 and so on, ultimately ending with values like 20, 15, 10, 5, 0. Note that, in this second case, every frame is only computed once, but it is sent by the output hardware twice. In both cases, however, the computation and the transmission are synchronized so that changes occur on a regular interval, not inconsistently (which is what janking does).

(Interestingly, owing to integer rounding, the second, slower case tends to use the same step size from one frame to the next, whereas the finer-grained, faster case is forced to round up or down in a way that makes the step sizes inconsistent more often; hard to say which, if either, would be preferable.)

(This business of adapting the computation frame rate to the output frame rate is why a lot of computer games seem to take a long time to start up. In addition to any data loading they are doing, many of them run a quick internal test to see if the computing hardware can keep up with the display hardware's frame rate. If not, the software does what I've described, and only provides a new image to the display on every second, or later, display frame. Or, in some cases, the game just gives up on synchronizing and makes you live with tearing like in my example picture.)
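
Here is the core of that idea in a short Java sketch (class and method names are mine, just for illustration): the level is always computed from how much time has actually elapsed, so missing a frame signal automatically produces a proportionally bigger step rather than a lagging fade:

```
// A sketch of the adaptive idea described above (invented names): levels are
// computed from elapsed time at each frame signal, so if the code only manages
// to run on every second or third frame, the steps simply get bigger instead
// of the fade drifting or stuttering.
public class TimeBasedFade {
    private final long fadeStartNanos;
    private final long fadeLengthNanos;
    private final int startLevel, endLevel;

    public TimeBasedFade(int startLevel, int endLevel, long fadeLengthMillis) {
        this.fadeStartNanos = System.nanoTime();
        this.fadeLengthNanos = fadeLengthMillis * 1_000_000L;
        this.startLevel = startLevel;
        this.endLevel = endLevel;
    }

    /** Called once per "frame ready" signal from the output side. */
    public int levelForNow() {
        double t = (System.nanoTime() - fadeStartNanos) / (double) fadeLengthNanos;
        t = Math.max(0.0, Math.min(1.0, t));            // clamp to the fade's duration
        return (int) Math.round(startLevel + t * (endLevel - startLevel));
    }
}
```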

Now, the very low-end Enttec device, the OpenDMX, relies on the computer itself to do the transmission timing. As I believe you pointed out to me once before (although I did not understand it at the time), this is problematic because the computer may not be able to keep up with the timing tolerances the DMX512 spec requires. A delay in sending a channel value might be interpreted by the rack (or the lights themselves, if they have their own dimmers) as the "break" that indicates a frame is done, with the rest of that frame being interpreted as the start of a new frame altogether. However, because the OpenDMX requires the computer to take responsibility for sending each frame, it would be easy to add a line of code to the C (or whatever) source that makes the "write" call such that it also signals a blocked thread in your dimming code. The blocked thread would unblock and proceed to compute the next frame, blocking again when it was done. This would provide the synchronization I am looking for.
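
In rough Java-flavored pseudocode, that hook might look like this (the adapter write and the frame computation are stand-ins, not the actual Enttec calls):

```
// A sketch (invented names, not the actual Enttec API) of the idea above:
// whatever code performs the OpenDMX "write" also releases a semaphore, and
// the fade-computing thread blocks on that semaphore, so its work is clocked
// by the transmissions.
import java.util.concurrent.Semaphore;

public class SyncedSender {
    private final Semaphore frameSent = new Semaphore(0);
    private volatile byte[] nextFrame = new byte[513];   // start code + 512 slots

    // transmit loop, standing in for the code that feeds the OpenDMX
    void transmitLoop() {
        while (true) {
            byte[] frame = nextFrame;        // grab the most recent complete frame
            writeToAdapter(frame);           // hypothetical: blocks for roughly one frame time
            frameSent.release();             // tell the fade thread a frame went out
        }
    }

    // fade-computation loop, clocked by the transmitter
    void fadeLoop() throws InterruptedException {
        while (true) {
            frameSent.acquire();             // block until a frame has been sent
            nextFrame = computeNextFrame();  // hypothetical: build the next set of levels
        }
    }

    private void writeToAdapter(byte[] frame) { /* adapter-specific */ }
    private byte[] computeNextFrame() { return new byte[513]; }
}
```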

The question I have is, for equipment where your computer isn't managing the frame transmission itself, is there any way to get this same signal as each frame is sent?
 
Oh, I see what you're saying - it's the synchronization between the PC and the adaptor that's the issue, not the actual DMX frame.

In that case, I got nothing; all the DMX work I do is at the actual microprocessor level, tied to the RS485 transceivers.
 
In all honesty, coming at this as someone who works in embedded devices every once in a while, I'm not sure the issue you're focusing on has the potential to improve your product for the end user. A lot of out of the box DMX implementations clock your dynamic values into a buffer to prevent the transmitting side from updating the frame part way through transmission (I have personal experience doing it with Atmel and Altera devices). Even if that wasn't the case, if you had fixtures next to each other that get out of sync for a single frame of DMX, I'm not convinced that the human eye will perceive the 1/40th of a second timing difference between fixtures changing. An ugly 8-bit fade is noticeable but if one fixture ends up 25ms ahead of the one next to it that's just not something that will be noticed unless you're REALLY looking for it.
As far as what happens at the other end of the cable, RS485 processing at the speed we're sending it is pretty trivial even for pretty cheap controllers. It's a matter of what the fixture's manufacturer decides to do with that information. If the fixture splits its PWM sequence up for 8-bit dimming there's not a whole lot you can do about it except recommend the purchase of better fixtures. As previously mentioned, several manufacturers at all price points are pushing the market towards 16-bit dimming, but if the 360Q has anything to teach us, we can rest assured that cheap or misguided fixtures that only do 8-bit dimming will continue to exist for many, many years to come.
 
A lot of out of the box DMX implementations clock your dynamic values into a buffer to prevent the transmitting side from updating the frame part way through transmission
That actually makes the problem worse. (Because the delayed update will include a level that was computed for an earlier time, so it becomes late as well as being discontinuous.) Now, if my code can find out when the next frame will be sent, it can compute the level appropriate to that time, and (if it can adapt to the frequency of transmission) that becomes a solution. But, if an adapter just sends each buffer intact after it gets it, with no way for my code to sync with it, that makes it worse.
I'm not convinced that the human eye will perceive the 1/40th of a second timing difference between fixtures changing. An ugly 8-bit fade is noticeable but if one fixture ends up 25ms ahead of the one next to it that's just not something that will be noticed unless you're REALLY looking for it.
I'm coming at this from my experience as an animator and will respectfully disagree. Unsynchronized events that should be matched are easily detected by the human eye at remarkably tiny intervals. (Note that the 1/40th-second you mentioned is below the flicker threshold, and janking is actually visible well above the flicker threshold as well.)

But, to be clear, I'm not addressing that question. Janking is not about syncing two visible things. It's about continuous changes over time to one thing. If you don't sync the animation changes (or, in the case of dimming, the level changes) to the visible updates (video frame painting for animation, DMX frame transmission for dimming) you get discontinuous changes that the eye does notice.
 
Unless your output device has a command to wait until it receives a message, all you *can* do is process data faster than it can output DMX, which is inherently a race condition.

In the past, I've done it at the microcontroller level by holding a frame until I sent a special character.

(Not the most efficient implementation, but I was sending ASCII characters so that I could order the MCU to set predefined RGB values to a set of three channels.)

The output string from the computer to the controller was thus a string that looked something like this: rr.rg.rgb.rbgb.rbg.brbg.brb!

in this implementation, each character is an RGB preset for a pixel.
r = full red
g = full green
b = full blue
w = r, g and b to full (white)
. = pixel off
! = Push data to output
(There were many more preset values, but in the interest of not taking forever to post, I'm not going to.)
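
For anyone curious, a decoder for that kind of command string might look roughly like this (my own reconstruction, not the actual MCU code):

```
// A toy decoder for the command string described above: each character sets
// one pixel's RGB preset, and '!' pushes the buffered frame out.
public class PresetDecoder {
    public static int[][] decode(String command) {
        java.util.List<int[]> pixels = new java.util.ArrayList<>();
        for (char c : command.toCharArray()) {
            switch (c) {
                case 'r': pixels.add(new int[] {255, 0, 0});     break;
                case 'g': pixels.add(new int[] {0, 255, 0});     break;
                case 'b': pixels.add(new int[] {0, 0, 255});     break;
                case 'w': pixels.add(new int[] {255, 255, 255}); break;
                case '.': pixels.add(new int[] {0, 0, 0});       break;
                case '!': return pixels.toArray(new int[0][]);   // push data to output
            }
        }
        return pixels.toArray(new int[0][]);
    }

    public static void main(String[] args) {
        int[][] frame = decode("rr.rg.rgb!");
        System.out.println(frame.length + " pixels decoded");
    }
}
```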
 
Steven
Seems to me that there are two issues you are talking about here. One is the crappiness of many LED fixtures in the dimming curve. The low end pops on too quickly and appears very steppy. Some manufacturers try to address this using 32-bit (instead of 8-bit) dimming. Many of these are STILL steppy on the low end. (I remember taking a look at an Altman Chalice on 32 bit a few years ago. The fixture popped on at a DMX value of about 10. Not smooth at all.) The two manufacturers that I am familiar with who seem to have solved this issue are ETC and Chauvet. (There may well be others, but I have not taken a close look.)

As to the synchronization issue - to solve it you probably need to utilize threads. Here's what I did in the product that I wrote a few years ago.

I have a FadeEngineThread that is figuring out the proper values that my fixture(s) need to put out. (Note: this is not the same as a DMX value from 0 to 255. See the discussion about 32-bit fading.)

I have a PublishThread that is pushing out data to my DMX protocol converter (i.e., a dongle, an Art-Net-to-DMX hardware device, etc.).

I have three buffers in my program. The FadeEngine buffer, the Publish buffer, and the Hold buffer.

When the FadeEngineThread is computing values, it fills up the appropriate DMX values in the FadeEngine buffer as it computes them. When it has completed a pass for all fixtures, it flushes the values in the FadeEngine buffer to the Hold buffer.

The Publish Thread checks to see if any data has changed in the Hold buffer and if so moves the contents of the Hold buffer to the Publish buffer. It then squirts that info out to your DMX converter via whatever protocol you are supporting.

You put in a synchronization block so that while the FadeEngineThread is updating the HoldBuffer, the PublishThread will block (or give up and just publish the current values again), and while the PublishThread is moving data to the Publish buffer, the FadeEngineThread will block. These flush operations are quick enough that the slight delay is not an issue.

So that's basically what we did for our low end system.
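
A bare-bones Java rendition of that scheme might look something like this (names and details invented; the real product surely differs):

```
// A minimal sketch of the three-buffer, two-thread scheme described above:
// the fade thread fills its own buffer, flushes it to a hold buffer under a
// lock, and the publish thread copies the hold buffer to the publish buffer
// under the same lock before sending it out.
public class ThreeBufferEngine {
    private final int[] fadeBuffer = new int[512];
    private final int[] holdBuffer = new int[512];
    private final int[] publishBuffer = new int[512];
    private final Object holdLock = new Object();
    private boolean holdDirty = false;              // guarded by holdLock

    // FadeEngineThread body, called once per computation pass
    void fadePass() {
        computeLevelsInto(fadeBuffer);              // fill fadeBuffer for this pass
        synchronized (holdLock) {                   // brief block while flushing
            System.arraycopy(fadeBuffer, 0, holdBuffer, 0, 512);
            holdDirty = true;
        }
    }

    // PublishThread body, called once per output cycle
    void publishPass() {
        synchronized (holdLock) {
            if (holdDirty) {
                System.arraycopy(holdBuffer, 0, publishBuffer, 0, 512);
                holdDirty = false;
            }
            // if nothing changed, just re-send the previous publishBuffer
        }
        sendToAdapter(publishBuffer);               // dongle / Art-Net / sACN, etc.
    }

    private void computeLevelsInto(int[] levels) { /* fade math goes here */ }
    private void sendToAdapter(int[] levels)     { /* protocol-specific */ }
}
```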

To my knowledge there is no attempt (or any way) to synchronize the DMX hardware device with the console. Enttec and DMXKing (to the best of my memory) do not support it. ACN and Artnet use TCP, which is by definition a streaming protocol. (There have been some attempts to work around issues of TCP packets arriving out of order, but that was never a problem in the kinds of systems I was working with.)

I assume having three buffers with multiple threads is a more or less standard way of solving the problem from the console standpoint. (If not, it should be, IMHO.)
 
A quick summary for the main problem in the thread:

There is no way to sync the DMX output device with your software, unless you code an output device yourself.
 
That actually makes the problem worse. (Because the delayed update will include a level that was computed for an earlier time, so it becomes late as well as being discontinuous.) Now, if my code can find out when the next frame will be sent, it can compute the level appropriate to that time, and (if it can adapt to the frequency of transmission) that becomes a solution. But, if an adapter just sends each buffer intact after it gets it, with no way for my code to sync with it, that makes it worse.

I'm coming at this from my experience as an animator and will respectfully disagree. Unsynchronized events that should be matched are easily detected by the human eye at remarkably tiny intervals. (Note that the 1/40th-second you mentioned is below the flicker threshold, and janking is actually visible well above the flicker threshold as well.)

But, to be clear, I'm not addressing that question. Janking is not about syncing two visible things. It's about continuous changes over time to one thing. If you don't sync the animation changes (or, in the case of dimming, the level changes) to the visible updates (video frame painting for animation, DMX frame transmission for dimming) you get discontinuous changes that the eye does notice.

I would expect a viewer to be intently focusing on animations so I would not be surprised if very small timing glitches were at least subconsciously noticeable. If, however, an audience member is giving your lighting the same full and undivided attention, we have much larger problems.
 
As I think some people already said, you're looking for Double Buffering:

Maintain 2 complete sets of internal levels in the software, and make sure you're always writing the one which the hardware is *not* writing out.
 
Unless your output device has a command to wait until it receives a message, all you *can* do is process data faster than it can output DMX, which is inherently a race condition.
Double-buffered video display adapters do it the other way: your code waits until it receives a signal from the adapter that it has sent a buffer which, therefore, you can now write to. If you signal the adapter that the buffer is ready for use before the next frame, the adapter will send that buffer. If not, the adapter reuses the other buffer. The output device never waits. It sends a buffer on every frame, on a regular schedule. But you can't write to the buffer it is in the process of sending. You write to the other of the two buffers, and try to make your code fast enough that it will complete all updates to that buffer, and signal the adapter that the buffer is ready for use, before the buffer currently being sent has been fully transmitted (meaning that the adapter will then go on to send a new frame).

Put another way:

1) While the adapter is transmitting Buffer 1, your code is writing a new frame to Buffer 2.

2) When your code finishes writing Buffer 2, it sends a signal to the adapter that Buffer 2 is ready for transmission, then waits for a signal from the adapter.

3) When the adapter has completed transmission of Buffer 1, if your code has signaled that Buffer 2 is ready for transmission, then it signals your code that Buffer 1 is ready for writing a new frame and starts transmitting Buffer 2, or else it sends Buffer 1 again and repeats this step.

4) When your code gets the signal from the adapter, it starts writing a new frame to Buffer 1.

From here on, the sequence repeats, with Buffers 1 and 2 alternating between being the buffer the adapter is sending, and being the buffer your code is writing a new frame into.

Note that Steps 2 and 3 can occur in either order. That is, your code might still be writing a frame into Buffer 2 (that is, still executing Step 1) after the adapter has completed transmitting Buffer 1. In that case, the adapter just sends Buffer 1 over again.

Note also that I have somewhat simplified this, to avoid some nasty programming problems that are typically solved by reliance on library and system code, and that aren't relevant to this discussion. (And finally also note that there are other approaches that use more than two buffers, but double-buffering is very common and solves the janking/tearing problem.)
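
For concreteness, here is roughly what those numbered steps look like in code (a simplified Java sketch with invented names; real adapters and display drivers hide this behind their own APIs):

```
// A stripped-down sketch of the two-buffer handshake laid out in the numbered
// steps above. The "adapter" side sends on a fixed schedule and never waits;
// the software side writes only the buffer that is not being sent.
import java.util.concurrent.atomic.AtomicBoolean;

public class DoubleBuffer {
    private final int[][] buffers = { new int[512], new int[512] };
    private volatile int sending = 0;                 // index the adapter is transmitting
    private final AtomicBoolean nextReady = new AtomicBoolean(false);

    // adapter side: runs once per output frame, on the hardware's clock
    void onFrameBoundary() {
        if (nextReady.compareAndSet(true, false)) {
            sending = 1 - sending;                    // swap: start sending the new buffer
            synchronized (this) { notifyAll(); }      // tell the writer it may proceed
        }
        transmit(buffers[sending]);                   // otherwise the old buffer repeats
    }

    // software side: fills the buffer the adapter is NOT sending
    void renderLoop() throws InterruptedException {
        while (true) {
            int writable = 1 - sending;
            fillFrame(buffers[writable]);             // compute the next set of levels
            nextReady.set(true);                      // hand it over
            synchronized (this) {
                while (nextReady.get()) wait();       // block until the swap happens
            }
        }
    }

    private void transmit(int[] frame)  { /* hardware-specific */ }
    private void fillFrame(int[] frame) { /* fade computation */ }
}
```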

Now, if "adapter" in this discussion means the gizmo that sends DMX data provided by my computer, and "frame" means, well, a DMX frame, this discussion would apply just as well to a lighting system as it does to a video game. In both cases, the changes computed by the software are sent in a synchronous manner to the hardware that makes those changes visible (either on a screen or in a can). In both cases, the size of the changes being computed in Step 1 depends on how much time has passed between the frame being computed and the frame computed before that, which your code can easily measure. If your code slows down to the point where it can no longer keep up with the adapter's frame rate, then it catches up by doubling the size of the change in the next frame it computes (or tripling it, or quadrupling it, and so on, if it has fallen more than one frame behind).

This mode of operation is commonly supported in video display hardware, and is also supported in Windows at the operating system level (it is even supported in the Java virtual machine, provided that the actual run-time platform supports it). It long pre-dates fancy video too, as this approach has been used for decades to speed up data transfer between computer memory and disk.

Like I said, for the Enttec OpenDMX, because the code that initiates a transmission actually runs on the host computer, that code could run in its own thread and do the job an adapter would usually do of sending and receiving signals to and from the dimming code. That would synchronize the use and alternation of two buffers. What I'm looking for is a self-clocking adapter that also sends and receives those synchronizing signals.
 
Steven
Seems to me that there are two issues you are talking about here. One is the crappiness of many LED fixtures in the dimming curve. The low end pops on too quickly and appears very steppy. Some manufacturers try to address this using 32-bit (instead of 8-bit) dimming. Many of these are STILL steppy on the low end. (I remember taking a look at an Altman Chalice on 32 bit a few years ago. The fixture popped on at a DMX value of about 10. Not smooth at all.) The two manufacturers that I am familiar with who seem to have solved this issue are ETC and Chauvet. (There may well be others, but I have not taken a close look.)

You've got it, John. The "steppiness" of LEDs is not something I expect to solve in software. But it is apparent to the eye mostly (I think) because of the fade speed of LEDs (effectively instantaneous), whereas any "steppiness" in a halogen bulb's curve is at least partly masked by the physical nature of the bulb (through its inability to change intensity at all without fading up or down smoothly as the filament cools down or heats up). This, alas, also reveals any lack of synchronization between the DMX frame transmitter and any code that is computing changes at reasonable update rates. You can reduce, but not eliminate, the appearance of this effect by upping the code's computation rate, but that wastes cycles and--for reasons I might be able to explain if I can find the right graphic--never can really solve the problem.

As to the synchronization issue - to solve it you probably need to utilize threads.

Agreed, unless the hardware itself cooperates (see my reply to EdSavoie, above).

Here's what I did in the product that I wrote a few years ago.
...
You put in a synchronization block so that while the FadeEngineThread is updating the HoldBuffer, the PublishThread will block (or give up and just publish the current values again), and while the PublishThread is moving data to the Publish buffer, the FadeEngineThread will block. These flush operations are quick enough that the slight delay is not an issue.

I love it! When the PublishThread is blocked waiting for FadeEngineThread to complete an update to the HoldBuffer, how does it (the PublishThread) realize it must give up (and, I assume, unblock) and publish the current values again?

ACN and Artnet use TCP, which is by definition a streaming protocol. (There have been some attempts to work around issues of TCP packets arriving out of order, but that was never a problem in the kinds of systems I was working with.)

We will return to a discussion of TCP at a later date:).

I assume having three buffers with multiple threads is a more or less standard way of solving the problem from the console standpoint. (If not, it should be, IMHO.)

I believe three would be necessary if the output device must always send the same buffer. If you can have it alternate, you can skip the copy step and use two by telling the output hardware to switch buffers. My slight experience with coding for the OpenDMX tells me this ought to work for that device (actually, I am probably being naive, as the "write" call to the OpenDMX may very well copy the buffer you send it internally).

Now, absent a signal from the output device (and I will take your word for it that the available hardware doesn't provide one, as you've clearly got more industry knowledge about this than I do), I think our multi-threaded approach would still work, provided that the call our code makes to send a buffer to the device can be made to block until the buffer has been committed by the device for transmission. Is that the case, or do contemporary devices use non-blocking writes?
 
