I believe this topic has addressed my question, but I thought I owed it to everyone who has tried to assist me to offer a clearer explanation of the phenomenon than I can provide by words alone. Accordingly, as we are all practitioners in the visual arts, here are some pictures.
To begin, imagine an ideal
fade from 20% to 10%, over an interval of one-tenth of a second. Graphing
DMX levels over tens of milliseconds, here's how that would go:
Figure 1
Now, right away we know that's not how actual
DMX fades work, as the levels we
send must be discrete integers in the range [0, 255]. Also, we can only update our instruments to new levels every so often. So, let us assume our
DMX packet transmission rate is 100 packets per second. This means we
send an update every 10 milliseconds. Thus, our new graph of
DMX levels actually transmitted, plotted against packet number, looks like this:
Figure 2
Because we only
send a new packet every 10 milliseconds, some intermediate values we could have used (if we were sending updates more often) get skipped. Still, it's a pretty good approximation of the ideal straight
line, with every change being either two or three steps down from its predecessor.
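To make that arithmetic concrete, here is a little Python sketch of the synchronized case. The numbers (20% to 10% of 255, one packet every 10 ms) are the ones behind my figures; the code itself is just an illustration I wrote for this post, not anything from my actual software:

```python
# Synchronized case: each packet's level is computed for the exact
# moment the packet goes out. Fade 20% -> 10% of full (255) over
# 0.1 s, with one packet every 10 ms.
START, END, DURATION = 0.20 * 255, 0.10 * 255, 0.1  # 51.0 down to 25.5
SEND_HZ = 100

levels = [round(START + (END - START) * (k / SEND_HZ) / DURATION)
          for k in range(int(SEND_HZ * DURATION) + 1)]
print(levels)
# [51, 48, 46, 43, 41, 38, 36, 33, 31, 28, 26] -- every step is 2 or 3
```

Every successive difference is a 2 or a 3, just as in Figure 2.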
But, that graph above is based on the assumption that the
level sent to the
instrument in each packet is computed to be the closest approximation to the ideal value that it can be, for the moment the packet is sent. That would be the case if, somehow, the computation of the value and the transmission of the value were simultaneous (and, thus, perfectly synchronous). The red lines in this graph
mark the moment at which the
level value is calculated:
Figure 3
What this discussion has really been about, however, is what happens if the computation of new levels is done appropriate to the moment of the computation, and not to the (future) moment when the
level is sent. For example, suppose the computation took place 127 times each second. That's faster than the 100-times-per-second rate of packet transmission, which means that nearly every computed value will be sent some time after it is computed and will be, at the time it is sent, higher than it should be (because we are fading down, not up). Here's what happens when you calculate the
level for the time the calculation is done, then
send that
level in the next subsequent future packet, when the calculations are done 127 times per second, and the packets are sent 100 times per second:
Figure 4
This graph shows that the
level sent in each packet is the
level most recently computed before the packet was sent, and that the
level computed was the closest possible
DMX level from the ideal
line at that moment of computation. As a result, the values are mostly a
bit high (because they are sent long enough after they are computed that, by then, they should be lower), and they step downward through the
fade erratically, sometimes stepping down two levels (at packets 1, 2, 3, 5, 6, 7, and 9) and sometimes stepping down four levels (at packets 4 and 8).
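For anyone who wants to poke at the numbers, here is a small Python simulation of that unsynchronized case. The parameters match my figures (20% to 10% of 255 over 0.1 s, levels computed 127 times per second, packets sent 100 times per second); everything else is illustrative scaffolding I made up for this post:

```python
# Unsynchronized case: levels are computed 127 times/s for the moment
# of computation, but packets go out only 100 times/s, each carrying
# the most recently computed level.
COMPUTE_HZ, SEND_HZ = 127, 100
START, END, DURATION = 0.20 * 255, 0.10 * 255, 0.1  # 51.0 down to 25.5

def ideal(t):
    """The ideal (continuous) level at time t into the fade."""
    return START + (END - START) * t / DURATION

# Moments at which the update code happens to run.
compute_times = [j / COMPUTE_HZ for j in range(int(COMPUTE_HZ * DURATION) + 1)]

packets = []
for k in range(int(SEND_HZ * DURATION)):
    send_time = k / SEND_HZ
    # Use the most recent computation at or before the moment of transmission.
    t = max(tc for tc in compute_times if tc <= send_time)
    packets.append(round(ideal(t)))

print(packets)
# [51, 49, 47, 45, 41, 39, 37, 35, 31, 29]
# Steps of 2 everywhere except packets 4 and 8, where the stale value
# suddenly "catches up" and we drop 4 at once.
```

Sure enough, the simulation reproduces the step pattern in the graph: drops of 2 at packets 1, 2, 3, 5, 6, 7, and 9, and drops of 4 at packets 4 and 8.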
A good specific example of why this is so problematic can be seen at packets 7 and 8. The value for packet 7 is computed very shortly after packet 6 is sent, where the red
line just to the right of "6" on the X axis intersects the green ideal
fade line at about 34.9. The
DMX level computed is 35, which is the most recently computed
level when packet 7 is sent. Almost immediately after packet 7 is sent, a new value is computed where the red
line just to the right of "7" intersects the green ideal
fade line. But, this value is never used as, before packet 8 is sent, another calculation is performed by the unsynchronized update code, where the red
line just to the right of "8" intersects the green ideal
fade line at about 30.9, yielding a
DMX level of 31. This becomes the most recently computed value before packet 8 is sent, so that's what packet 8 contains. Thus, we see that synchronized calculation gets us only steps of 2 and 3, while unsynchronized calculation gets us steps as big as 4. As the updates are calculated longer and longer before they are used, they get more and more out of date until, eventually, the next unsynchronized update is computed very shortly before it is used (that's what happens just before packet 8 is sent), and the update suddenly "catches up" (though not completely) to the actual packet being sent.
That erratic behavior resulting from unsynchronized computation and transmission rates is what, in computer animation, is called "jank." You can see it, believe me. With the whopping big steps I am seeing in the lower levels on my crummy
LED lights, it makes their poor low-level performance look even worse.
Now, for video animation, you need some help from the hardware to synchronize alternating buffers. With a
DMX adapter, however, this discussion has me thinking I can synchronize by waiting on a
blocking write
call to transfer my latest complete buffer out to the
universe. I'm pretty sure my OpenDMX will do this, but I'll have to look at some other specs to know if it will work for other devices. Regardless, particularly with
LED instruments, I don't think I can just ignore it. (My wife discussed this with me and had the nifty idea that a
DMX receiver could inform the computer as to when a packet had gone out; I'll keep that in mind if all else fails. She's bright, my wife!)
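Here is the shape of the loop I'm imagining, sketched in Python with the adapter's write stubbed out as a callback. The make_packet layout and the function names are my own illustrative inventions, not any real driver's API; what the OpenDMX layer actually exposes (break timing, start code, and so on) is a separate question. The point is just the ordering: compute the level for frame k, then let a blocking write pace the loop, so each value is computed for (approximately) the moment it is transmitted:

```python
def make_packet(level, channel=1):
    """A 512-channel DMX data buffer with one channel set. (Start code
    and break handling are left to the adapter layer, which varies by
    device.)"""
    buf = bytearray(512)
    buf[channel - 1] = level
    return bytes(buf)

def run_fade(write, start, end, duration, send_hz=100):
    """Fade one channel from start to end over duration seconds.
    `write` is assumed to block until the previous buffer has actually
    gone out on the wire."""
    frames = int(send_hz * duration)
    sent = []
    for k in range(frames + 1):
        # Compute for the moment of transmission; the blocking write
        # keeps computation and transmission in lock-step.
        level = round(start + (end - start) * k / frames)
        sent.append(level)
        write(make_packet(level))
    return sent

# Dry run with a do-nothing stand-in for the adapter write,
# using the same fade as my figures (20% -> 10% over 0.1 s):
print(run_fade(lambda buf: None, 0.20 * 255, 0.10 * 255, 0.1))
# [51, 48, 46, 43, 41, 38, 36, 33, 31, 28, 26]
```

Because the write blocks, computation can never run ahead of transmission, and we are back to the well-behaved steps of 2 and 3 from the synchronized case.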
Hope this set of graphs helped clarify some of the murkier parts of my question. As always, thanks for the CB support!