"Vsync" for DMX512?

I would expect a viewer to be intently focusing on animations so I would not be surprised if very small timing glitches were at least subconsciously noticeable. If, however, an audience member is giving your lighting the same full and undivided attention, we have much larger problems.

What, my audience isn't composed of obsessed technical nerds who ignore the action while staring at the fixtures and mentally filtering the sound for sixty-cycle hum like I do (did I say that out loud?)?

For animation/gaming, you've actually got your finger on it. The problem is in the subconscious mind. Actually, to be precise, it's in the lower brain. Discontinuous motion (or lighting) is something we are deeply programmed to notice, as it is often caused by animals that, among other things, might eat us. Have you ever noticed how something that flickers seems more distracting at the periphery of your vision than when it is centered in your field of view? That's the reason. Your lower brain is built with a kind of "spider sense" about signals coming from your peripheral senses, because that's where you might otherwise miss a predator (who might eat you).

Animators, and even game designers, sometimes factor this in. Animators often deliberately reduce activity outside the center of current interest, to avoid distraction. Game designers exploit it on purpose, so you'll notice something they want you to see (by having a little flag, or whatever, wave far off-center of the screen, where not much else is moving).

Being an obsessed technical nerd, I suppose I noticed it where others wouldn't, but notice it I did. Since I am familiar with how this problem is solved in another discipline, I thought I'd pursue the question here, to see if that solution (or anything like it) is available. Or, if not, what solution (if any) exists.
 
As I think some people already said, you're looking for Double Buffering:

Maintain 2 complete sets of internal levels in the software, and make sure you're always writing the one which the hardware is *not* writing out.

Precisely. But in video and disk i/o, that requires some way of knowing when the hardware is done writing a given buffer out. Is this done by simply relying on a blocking output call? If so, I can take it from there. I'm just very inexperienced with DMX interfaces, with only some work on the Enttec OpenDMX under my belt. With that device, I can do it (I think; haven't actually tried yet). But the OpenDMX relies on the computer for all timing, which I am persuaded is not How The Big Boys And Girls Do It.

But, yes, synchronized double-buffering is what I'm after.
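For anyone following along, here's a minimal sketch of what I mean, in Python. The send_frame callable is hypothetical; it stands in for whatever blocking transmit call a given adapter's driver provides, and everything else here is illustration, not a real driver.

```python
import threading

UNIVERSE_SIZE = 512  # one full DMX512 universe of 8-bit levels

class DoubleBufferedUniverse:
    """Two complete level buffers; the fade engine only ever writes the
    one the hardware is *not* currently sending."""

    def __init__(self, send_frame):
        self._send_frame = send_frame  # hypothetical blocking transmit call
        self._buffers = [bytearray(UNIVERSE_SIZE), bytearray(UNIVERSE_SIZE)]
        self._front = 0                # index of the buffer being transmitted
        self._lock = threading.Lock()

    def write_levels(self, levels):
        """Update the back buffer; never touches the frame on the wire."""
        with self._lock:
            back = self._buffers[1 - self._front]
            back[:len(levels)] = bytes(levels)

    def send(self):
        """Swap buffers, then block until the frame is fully transmitted.
        The return from this call is the 'vsync' moment."""
        with self._lock:
            self._front = 1 - self._front
            frame = bytes(self._buffers[self._front])
        self._send_frame(frame)        # returns only when the frame is out
```

The point is that send() returning is the synchronization signal: compute the next set of levels only after it returns, and computation and transmission can never drift apart.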
 
Looking at the Enttec DMX USB Pro API document, on page 5 under "Output Only Send DMX Packet Request", it seems that the device is given an entire DMX frame at a time, which would prevent frame "tearing" as long as it internally outputs only whole frames. I don't think this would let you sync your lighting program to the DMX output, but it might give you basic double buffering if the device firmware works as I would guess it does (replacing the old DMX frame buffer with the new one between output frames). Then, if you also set the DMX output rate using "Set Widget Parameters Request" (page 4), you could either try to maintain a rate close to that in your program, or try to run your program at a multiple of that rate, to minimize the jitter.
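In case it's useful, here's how I read the framing for that message (a sketch in Python; the SOM/EOM bytes and label value are from my reading of the API document, so double-check against it before trusting this):

```python
SOM, EOM = 0x7E, 0xE7            # start/end-of-message delimiters
LABEL_OUTPUT_ONLY_SEND_DMX = 6   # "Output Only Send DMX Packet Request"

def make_output_packet(levels, start_code=0):
    """Wrap a complete DMX frame (start code + channel levels) in a
    widget message, so the device always receives whole frames."""
    payload = bytes([start_code]) + bytes(levels)
    header = bytes([SOM, LABEL_OUTPUT_ONLY_SEND_DMX,
                    len(payload) & 0xFF,          # data length, LSB first
                    (len(payload) >> 8) & 0xFF])
    return header + payload + bytes([EOM])
```

If the firmware really does swap in each new frame between output cycles, writing one of these messages per frame would give you the double buffering for free.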
 
I believe this topic has addressed my question, but I thought I owed it to everyone who has tried to assist me to offer a clearer explanation of the phenomenon than I can provide by words alone. Accordingly, as we are all practitioners in the visual arts, here are some pictures.

To begin, imagine an ideal fade from 20% to 10%, over an interval of one-tenth of a second. Graphing DMX levels over tens of milliseconds, here's how that would go:

Figure1.png

Figure 1

Now, right away we know that's not how actual DMX fades work, as the levels we send must be discrete integers on the range [0, 255]. Also, we can only update our instruments to new levels every so often. So, let us assume our DMX packet transmission rate is 100 packets per second. This means we send an update every 10 milliseconds. Thus, our new graph of DMX levels actually transmitted, plotted against packet number, looks like this:

Figure2.png

Figure 2

Because we only send a new packet every 10 milliseconds, some intermediate values we could have used (if we were sending updates more often) get skipped. Still, it's a pretty good approximation of the ideal straight line, with every change being either two or three steps down from its predecessor.
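To make that concrete, here's a little Python that reproduces the numbers behind Figure 2 (my 20%-to-10% fade becomes DMX 51 down to 25.5, sampled exactly at each 10 ms packet time):

```python
def ideal_level(t, i_start=51.0, i_end=25.5, duration=0.1):
    """The ideal fade line: 20% (DMX 51) down to 10% (DMX 25.5) in 0.1 s."""
    return i_start - t * (i_start - i_end) / duration

def synchronized_levels(packets=10, frame_interval=0.01):
    """Level per packet when computation coincides with transmission."""
    return [round(ideal_level(n * frame_interval)) for n in range(packets)]

levels = synchronized_levels()
steps = [a - b for a, b in zip(levels, levels[1:])]
# levels: 51, 48, 46, 43, 41, 38, 36, 33, 31, 28
# steps alternate between 3 and 2: the best the 8-bit grid allows at 100 Hz
```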

But, that graph above is based on the assumption that the level sent to the instrument in each packet is computed to be the closest approximation to the ideal value that it can be, for the moment the packet is sent. That would be the case if, somehow, the computation of the value and the transmission of the value were simultaneous (and, thus, perfectly synchronous). The red lines in this graph mark the moment at which the level value is calculated:

Figure3.png

Figure 3

This discussion has been all about what happens, however, if the computation of new levels is done appropriate to the moment of the computation, and not to the (future) moment when the level is actually sent. For example, suppose the computation took place 127 times each second. That's faster than the 100-per-second packet transmission rate, which means that nearly all computations will result in values that are sent some time after they are computed and will be, at the time they are sent, higher than they should be (because we are fading down, not up). Here's what happens when you calculate the level for the time the calculation is done, then send that level in the next packet, with calculations done 127 times per second and packets sent 100 times per second:

Figure4.png

Figure 4

This graph shows that the level sent in each packet is the level most recently computed before the packet was sent, and that the computed level was the closest possible DMX level to the ideal line at the moment of computation. As a result, the values are almost all a bit high (because they are sent long enough after they are computed that they should be lower), and they step downward through the fade erratically, sometimes stepping down two levels (at packets 1, 2, 3, 5, 6, 7, and 9) and sometimes stepping down four levels (at packets 4 and 8).

A good specific example of why this is so problematic can be seen at packets 7 and 8. The value for packet 7 is computed very shortly after packet 6 is sent, where the red line just to the right of "6" on the X axis intersects the green ideal fade line at about 34.9. The DMX level computed is 35, which is the most recently computed level when packet 7 is sent. Almost immediately after packet 7 is sent, a new value is computed where the red line just to the right of "7" intersects the green ideal fade line. But this value is never used: before packet 8 is sent, another calculation is performed by the unsynchronized update code, where the red line just to the right of "8" intersects the green ideal fade line at about 30.9, yielding a DMX level of 31. This becomes the most recently computed value before packet 8 is sent, so that's what packet 8 contains. Thus, synchronized calculation gets us only steps of 2 and 3, but unsynchronized calculation gets us steps as big as 4. As the updates are calculated longer and longer before they are used, they get more and more out of date until, eventually, the next unsynchronized update is computed very shortly before it is used (that's what happens just before packet 8 is sent), and the update suddenly "catches up" (though not completely) to the actual packet being sent.
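And here is the Figure 4 scenario in the same terms: levels computed on a free-running 127 Hz clock, each packet carrying the most recently computed value (again a sketch of my example, not anyone's actual console code):

```python
def ideal_level(t, i_start=51.0, i_end=25.5, duration=0.1):
    """The ideal fade line: 20% (DMX 51) down to 10% (DMX 25.5) in 0.1 s."""
    return i_start - t * (i_start - i_end) / duration

def unsynchronized_levels(packets=10, frame_interval=0.01, calc_rate=127):
    """Each packet carries the latest level computed at or before its
    transmit time; computations happen on their own 127 Hz clock."""
    out = []
    for n in range(packets):
        last_calc = int(n * frame_interval * calc_rate)  # Int(n*t*f)
        out.append(round(ideal_level(last_calc / calc_rate)))
    return out

levels = unsynchronized_levels()
steps = [a - b for a, b in zip(levels, levels[1:])]
# levels: 51, 49, 47, 45, 41, 39, 37, 35, 31, 29
# steps of 2 everywhere except the 4-step "catch up" jumps at packets 4 and 8
```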

That erratic behavior, resulting from unsynchronized computation and transmission rates, is what, in computer animation, is called "janking." You can see it, believe me. With the whopping big steps I am seeing in the lower levels on my crummy LED lights, it makes their poor low-level performance look even worse.

Now, for video animation, you need some help out of the hardware to synchronize alternating buffers. With a DMX adapter, however, this discussion has me thinking I can synchronize by waiting on a blocking write call to transfer my latest complete buffer out to the universe. I'm pretty sure my OpenDMX will do this, but I'll have to look at some other specs to know if it will work for other devices. Regardless, particularly with LED instruments, I don't think I can just ignore it. (My wife discussed this with me and had the nifty idea that a DMX receiver could inform the computer as to when a packet had gone out; I'll keep that in mind if all else fails. She's bright, my wife!)

Hope this set of graphs helped clarify some of the murkier parts of my question. As always, thanks for the CB support!
 
FWIW: This is most troublesome at short fade times, and that's when people won't notice anyway.

The only time it's really bad at long fades, I suspect, is when you can *see the faces of the lights*, and *they're on different, but 'synchronized' channels*... and they don't all move simultaneously.
 
LED stage lighting is still in its infancy. Yes, we've been dealing with it for a number of years, but in the great scheme of things, DMX is not a good format, and I suspect future fixtures may use a language similar to vector graphics, where an instruction is sent rather than values. Let's face it, both the board and the fixtures are microprocessor based, and things have come a LONG way since DMX was introduced. Standards are hard to change from. Witness the Edison lamp base, still in almost every home 100 years after its introduction. DMX worked great in the days of conventional dimmers and filament-based lamps. Its slow stream of bits more than kept up with the operator's hand. Now we have the need for large-scale data dumps, and we can reliably port data at 1 GHz. I keep waiting for someone to reinvent this worn-out wheel. It is long overdue, but whoever takes the first step takes on a lot of risk. Yet, let's face it, the processing power and data transports of modern computer equipment make our standard way of doing things look foolish!

Instead of a series of numbers, "80% goto 12% 500Ms"
 
FWIW: This is most troublesome at short fade times, and that's when people won't notice anyway.
Why's that? In my example, I went from 20% to 10% in 0.10 seconds. But that could just as easily be part of a full-to-out fade that was a full second long, wherein the inconsistent stepping I illustrated above would be identical.
 
Standards are hard to change from.

So true. A large installed base can be very inertial. In my humble opinion, one of the most impressive engineering achievements of the twentieth century was the addition of color to black-and-white television broadcasting. Not only did they add color to a system not designed for it, but they did it in a way that allowed existing black-and-white receivers to receive color broadcasts in black-and-white. Anything else would have required the entire television-watching community to go out and buy a new (and more expensive) receiver.

But, only about a half-century later, we did just that, when the last "standard definition" transmitter in the western world shut down, and everyone's existing television became useful only as an amateur radio experimenter's gadget or else as a nightstand. HDTV was just too good to wait for any longer in the name of backward-compatibility.

I suspect DMX512 will fade away via some hybrid of those two historical examples. That is, we'll soon start using gear that sends scheduled changes to instruments that can tell time and do their own dimming while, at the same time, we'll outfit our existing DMX racks and instruments with "converters" that receive those scheduled changes and transmit them as DMX512 commands. That will let people continue to use their installed gear while transitioning to the new stuff, whatever that turns out to be.

In the meantime, I'm trying to squeeze the best performance I can out of what I've got, for as little money as possible. For me in particular, that means using my own software to run these crummy Chinese lights, which further means I need to synchronize their DMX frames with their change computations. But, I think this discussion has shown me a couple of ways I might do that. More later...
 
at the same time, we'll outfit our existing DMX racks and instruments with "converters" that receive those scheduled changes and transmit them as DMX512 commands. That will let people continue to use their installed gear while transitioning to the new stuff, whatever that turns out to be.
I am reminded of all the D-A set-top boxes that allow old analog sets to function with digital signals.

That is the way I see things going, especially considering it already happens inside the console in one layer of software, with a second layer converting to DMX so the equipment will understand. Only two roadblocks stand in the way:
1) Installed base of equipment.
2) Coming up with a standard.
Judging by history, #2 will be the hard one! Certainly, all of the technology already exists.
 
Erm, both Art-Net and sACN use UDP not TCP...
Now returning you to your regularly scheduled viewing...

I am reminded of all the D-A set-top boxes that allow old analog sets to function with digital signals.

That is the way I see things going, especially considering it already happens inside the console in one layer of software, with a second layer converting to DMX so the equipment will understand. Only two roadblocks stand in the way:
1) Installed base of equipment.
2) Coming up with a standard.
Judging by history, #2 will be the hard one! Certainly, all of the technology already exists.

There has been an attempt to come up with a standard (ACN). And it failed miserably (not streaming ACN, which is taking hold). It failed, IMHO, because there was not a clear business driver for the standard and it succumbed to second-system syndrome (trying to do too much).
 
By the way, if anyone here (or who finds this discussion later) wants to play with the arithmetic, here are the expressions I used to create the graphs above:

The level sent in a particular DMX frame is:

Round(L(Int(nft)/f))

Where:

Round(x) = integer closest to x
Int(x) = greatest integer equal to or less than x
L(x) = level value on the range [0, 255] sent at time x
n = DMX frame number (starting at 0)
f = update calculation frequency in Hertz
t = interval between DMX frames in seconds

And, in particular:

L(x) = Is - x(Is - Ie)/d

Where:

Is = intensity at the start of the fade
Ie = intensity at the end of the fade
d = duration of the fade

This assumes the fade begins, the first calculation is done, and the first DMX frame is sent, all at the same time, with the first DMX frame containing the result of the first calculation.
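For anyone who'd rather run the expressions than read them, here they are transcribed into Python (the constants 51 and 25.5 are just the 20% and 10% endpoints from my earlier example):

```python
def L(x, Is=51.0, Ie=25.5, d=0.1):
    """Level at time x: L(x) = Is - x(Is - Ie)/d."""
    return Is - x * (Is - Ie) / d

def frame_level(n, f=127.0, t=0.01):
    """Level sent in DMX frame n: Round(L(Int(nft)/f)), i.e. the most
    recent calculation made at or before the frame's transmit time."""
    return round(L(int(n * f * t) / f))
```

With those defaults, frame_level(7) gives 35 and frame_level(8) gives 31, the packets discussed under Figure 4.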
 

So, if I understand, your application question is "How do I get a large number of LED lights to dim smoothly (or simultaneously; I'm not sure which of these is the priority in your discussion)?"

Most of the time-based discussion has addressed the "simultaneity" issue (which is likely only important with groups of instruments), but not necessarily the smoothness issue (which impacts even a single instrument).

For the smoothness issue, looking at your graphs, isn't there also the "transfer function" of the instrument (i.e., DMX=128 -> DMX=127 results in a change in light output of ???%, or lumens, or whatever unit is most useful)?
So, the non-linearity of that relationship would seem to be as much of a concern as the issue you've described here, especially over very short time frames (e.g., <1 s).

Out of curiosity, is this visual effect something that you've seen or is this discussion more theoretical/academic? If this is an artifact that you've seen, it might be interesting to understand the context (e.g. are you actually doing full-motion video, or is it a very large number of linear pixels, etc.).
 
Out of curiosity, is this visual effect something that you've seen or is this discussion more theoretical/academic? If this is an artifact that you've seen, it might be interesting to understand the context (e.g. are you actually doing full-motion video, or is it a very large number of linear pixels, etc.).

I can see it. It's visible with a single light as not only big jumps in intensity at the low end (which cheap LEDs do a lot, I'm told), but also as occasional "double jumps," for the reason my discussion on janking addressed. Now, I am using non-professional, open-source software that, as far as I can tell, does nothing to address janking. A commercial console could well be synchronizing its fade interpolation steps with its DMX frame transmissions, in which case you'd still see the big jumps that cheap LEDs exhibit, but you wouldn't see the double jumps that janking causes.

If anyone associated with a commercial console vendor has some inside knowledge to share on this, I'd love to read it.
 
I guess I haven't been making myself understood in previous posts.

AFAIK, no console makers attempt to handle the DMX refresh rate issue. It maxes out at 44 Hz and 8-bit resolution, unless the fixture supports a higher resolution, in which case they may send a 16-bit value, but no faster than 44 times per second, so there will still be a step.

Many fixtures implement smoothing internally, with selectable dimmer curves that govern how the smoothing works. Examples were given in earlier posts. The industry uses the term smoothing, not janking or any other video-gaming term, to refer to interpolating the values between two DMX packets. I don't know of any fixture that tries to guess the next value to arrive; they all likely implement smoothing as going from the last value they latched to the value they just received, during the gap before the next packet is due.
 
(e.g. are you actually doing full-motion video, or is it a very large number of linear pixels, etc.).
Ah, I mistook your last question for being about theater lights, but you specified video.

Yes, not only have I seen janking/tearing in video, it is not a matter of speculation on my part. Google "janking" and you'll get lots of info on it, though the term is not universally used in every discussion of the phenomenon it names.

Keep in mind that, even if you buffer up whole frames and never update the frame being displayed, you will not be doing enough to avoid janking. By refraining from updating the buffer currently on display, you will avoid tearing, but if you don't synchronize your animation updates with your video updates, there will always be some irregularity in the apparent motion. Even moving a single pixel across your display will show a pronounced unevenness if you don't synchronize.

(If you do look up discussions about this, prepare to see a lot of unsupported claims that "it isn't really a problem." People seem to think that a lot of fake "solutions," most of them based on flawed understandings of sampling, reduce the problem to a level below the threshold anyone would care about. These people haven't tried those solutions. I know because I was one of them until I wrote some actual code and found that, yes, it is very easy to see the mismatch if you don't synchronize your animation updates with video updates.)
 
Ah, I mistook your last question for being about theater lights, but you specified video.

No, you actually interpreted my question correctly as being about theatrical/show lighting. I'm very familiar with the artifacts you mentioned in the world of full-motion video, but I've never witnessed the simultaneity issue in theatrical/show lighting, so I never considered that it could be an issue (of course, the smoothness issue is well known, but has been mostly addressed in higher-end instruments). However, I don't do fast-paced shows with many lights, which is where I would expect the simultaneity issue to be more pronounced.
 
AFAIK, no console makers attempt to handle the DMX refresh rate issue. It maxes out at 44 Hz and 8-bit resolution, unless the fixture supports a higher resolution, in which case they may send a 16-bit value, but no faster than 44 times per second, so there will still be a step.

Doesn't it bottom out at 44 Hz? I can send a DMX frame a lot more often than that if I don't send 512 values. In fact, IIRC, I must send my frames a lot more often than that if I am only sending a small number of channels, as long delays can be interpreted by receivers as streaming having stopped entirely (in which case everything goes out).

Many fixtures implement smoothing internally and selectable dimmer curves that govern how the smoothing works. Examples were given in earlier posts. The industry uses the term smoothing, not janking or any other video gaming term to refer to interpolating the values between 2 DMX packets.

Janking and smoothing are not related. If I understand you, smoothing is something an instrument does to introduce intermediate levels as it transitions from one 8-bit DMX level to another. This would be a great solution to the visible jumps I see in the low end of my cheap LED lights' intensities.

Janking is what you get when you compute interpolated values at a regular rate and display the results of those computations at a different regular rate. As my graph at Figure 4 shows, the displayed values gradually grow more and more out of step, falling farther and farther behind what they should be, until two steps are computed and the level displayed partially catches up. In video animation, this is solved by clocking the interpolation computations with a signal provided by the display hardware, such that the display rate and the computation rate are the same.

In the context of theatrical lighting, the interpolation would be between the starting and ending levels of an instrument's parameters at the beginning and ending of a fade. Again, as per Figure 4, if the interpolation rate is not synchronized to the DMX frame transmission rate, the levels sent to the instruments will gradually fall farther and farther behind what they should be at the moment the DMX frame is sent until, in the same way, they (mostly) catch up. The solution here would be to simply block the interpolation loop after it completes a cycle until the most recently transmitted DMX frame has completely gone out to the universe. This would synchronize the two loops, and solve the problem. Now, like I said, I'm seeing it in some amateur software. Professional manufacturers may already be clocking their interpolation loops by blocking against the frame transmission, in which case there is no problem to solve. As that would be a very natural way to drive that loop, it may be that this is how it has been done for a long time, and the "problem" I'm addressing never had a chance to show up in the first place.
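A sketch of that blocking scheme, for concreteness (Python; send_dmx_frame is hypothetical and stands in for a driver call that returns only after the frame is fully on the wire):

```python
import time

def run_fade(send_dmx_frame, i_start, i_end, duration):
    """Fade a single channel; the blocking transmit call itself clocks the
    interpolation loop, so levels are always computed for (roughly) the
    moment they go out."""
    t0 = time.monotonic()
    while True:
        elapsed = time.monotonic() - t0
        if elapsed >= duration:
            send_dmx_frame(bytes([round(i_end)]))  # land exactly on the target
            break
        level = i_start - elapsed * (i_start - i_end) / duration
        send_dmx_frame(bytes([round(level)]))      # blocks until transmitted
```

No timers, no sleeping: the loop simply can't run ahead of the universe, which is exactly the synchronization the graphs above were missing.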
 