Stevens R. Miller
Well-Known Member
This will be a hard question to ask, as I probably am not acquainted with the proper terminology. Let's see if I can make myself understood. Let me start with a comparison to something I (think I) do understand pretty well:
Video is created by painting a series of still images on a monitor screen at a rate high enough to fool the brain into thinking it is seeing continuous motion. Typically, the rate is constant, often 60 Hz. When each image captures the scene as it was at a fixed instant, and successive images sample the scene at regular intervals as it changes, we see smooth, natural motion on the screen.
So, if a video camera captures frames at a constant rate of 60 per second, and we play those back at 60 per second, we, for lack of a more sophisticated way of putting it, get what we want.
Video games face a problem not faced by video recorders: the scene might change while a frame is being drawn onto the monitor. That's because the frame isn't presented on the monitor all at once. Instead, it typically starts being drawn from left to right along the top-most row of pixels, then continues from left to right across the second row down, and so on, usually using up the entire frame interval to complete the process. This means that if a computer is generating the frames, the frame it generated last may be replaced by a new one while the monitor is still part-way through drawing it. Lots of video games show this problem and, because of the way it looks, it is often called "tearing."
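If it helps to see the mechanism stripped down, here's a toy model in Python (the buffers, row count, and swap point are all made up for illustration; no real graphics API is this simple):

```python
# Toy model of tearing: the "monitor" reads the frame buffer one row at
# a time, while the "game" swaps in a whole new frame partway through
# the scan. The displayed image ends up half old frame, half new frame.

frame_a = ["old"] * 6        # the frame the monitor started drawing
frame_b = ["new"] * 6        # the frame the game finishes mid-scan

buffer = frame_a
shown = []
for row in range(6):
    if row == 3:             # game completes a new frame mid-scan...
        buffer = frame_b     # ...and replaces the buffer immediately
    shown.append(buffer[row])

print(shown)   # ['old', 'old', 'old', 'new', 'new', 'new'] -- a "tear" at row 3
```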
To solve this, a game can be written to draw its frames into a buffer, then make that buffer available for use after the current frame is completely presented, with the guarantee that it will not be changed after being made available. The game program alternates between two such buffers, building the next image in the buffer you don't see, while leaving untouched the buffer you are seeing at any given moment. The feature that makes this possible is called "Vsync," short for "vertical synchronization" (the swap is timed to the vertical blanking interval between frames). It requires support from the graphics display hardware, and cooperation from the game software. Without both, tearing is inevitable. This phenomenon, where two discrete sequences are unsynchronized and irregular visual effects appear when the sequences are somehow merged, is more generally known as "janking," a time-based cousin to the space-based aliasing in computer graphics everyone knows as "jaggies."
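And here's the double-buffered version of the same toy model, with the swap held until the simulated vertical blank:

```python
# Double buffering: the game renders only into the back buffer, the
# monitor scans out only the front buffer, and the swap happens at the
# (simulated) vertical blank, so a half-finished frame is never shown.

front = ["frame 1"] * 6      # the monitor scans out only this buffer
back  = [None] * 6           # the game draws only into this one

shown = []
for row in range(6):
    back[row] = "frame 2"    # game renders mid-scan, but off-screen
    shown.append(front[row]) # monitor never sees the work in progress

print(shown)                 # six rows of "frame 1": no tear

# --- vertical blank: scan-out finished, now it is safe to swap ---
front, back = back, front    # the completed "frame 2" becomes visible
```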
I am sure you are wondering, if you have read this far, what this has to do with theatrical lighting.
Well, I have written some software to dim lights via DMX512. Until recently, all the lights I have worked with used halogen bulbs. These don't change brightness very fast. From "full" to "out" takes about a second. "Out" to "full" is a bit faster, but still not instantaneous. All other transitions from one level to another also take some time, owing to the simple fact of that being how halogen bulbs behave. But now I am dealing with LEDs, which change intensity virtually instantly. For crummy LEDs, which often have strikingly non-linear dimming curves, this creates noticeable "stepping" at the low end of their brightness range. A change from, say, 20 to 21 results in a sudden, visible jump in the lighting. (A lot of you warned me about this; you were right.) A halogen bulb, if it even acts this way, hides the jump by virtue of its own inherent smoothing of transitions from one level to another (a result of the fact that halogen bulbs just don't change intensity instantly).
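To put rough numbers on that 20-to-21 jump, here's a quick calculation assuming a square-law dimming curve (an assumption on my part; real cheap-LED curves vary, but are often at least that steep at the bottom):

```python
# Why the low-end step is visible: with a square-law curve, one 8-bit
# step near the bottom is a large *relative* jump in light output.

def output(dmx):
    return (dmx / 255) ** 2      # assumed curve: relative light output

for a, b in [(20, 21), (200, 201)]:
    jump = (output(b) - output(a)) / output(a) * 100
    print(f"{a} -> {b}: output jumps {jump:.1f}%")

# 20 -> 21 is about a 10% jump in output; 200 -> 201 is about 1%.
# The eye's roughly logarithmic response makes the low-end step obvious.
```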
Now, unless there is an LED light out there that will interpolate internally from one DMX eight-bit level to another (is there?), I'm going to be stuck with this. But it gets worse if my software computes changes at one rate, and updates sent to the LED occur at another rate. A full DMX512 universe can be refreshed at most about 44 times per second. If my software computes changes at its own fixed rate, say 50 times per second, but does not synchronize them with the transmission of DMX512 frames, I'm inevitably going to miss some changes (that is, two frames that should send different values to an instrument will actually send the same value, because my computed change comes a little too late), and I'm inevitably going to catch up on the missed ones by skipping steps (that is, two frames that should send close values to an instrument will send values that differ by more than they would if my changes and my DMX512 frames were synchronized). A crude simulation of this beat effect follows below. (NOTE: This topic often provokes the suggestion that the changes should just be computed at twice the frame rate, or some other multiple. Instead of going on at even more tedious length, I will just say this doesn't solve the problem.)
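Here's that simulation, with made-up numbers (a fade engine ticking one 8-bit step per tick at 50 Hz, sampled by a transmitter at the ~44 Hz full-universe maximum):

```python
# Beat effect between two unsynchronized clocks: the transmitter samples
# the fade engine's output on its own schedule, so some steps get skipped.

compute_period = 1 / 50          # fade engine tick (assumed)
frame_period   = 1 / 44          # full-universe DMX frame (assumed)

def level_at(t):
    return int(t / compute_period)   # fade advances one step per tick

last = None
for n in range(16):
    t = n * frame_period             # the transmitter's clock
    v = level_at(t)
    note = ""
    if last is not None and v == last:
        note = "  <- repeated a value"
    if last is not None and v - last > 1:
        note = "  <- skipped a step"
    print(f"frame {n:2d}: send {v:3d}{note}")
    last = v

# With the transmit clock slower than the compute clock you get skips;
# were it faster, you'd get repeats. Either way, the fade is uneven.
```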
This not only means I am dealing with visibly discrete changes in my LED lights, it also means that they fade unevenly, with some pairs of frames not changing them at all, and others changing them more than I wanted, to catch up with the ones that didn't change (or vice versa). That's a form of janking. With halogen lights, this never made itself apparent to me, since they impose their own "smoothing" effect on changes. Halogens are actually janking too, but you never notice it. With LEDs, you do.
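If you want to see why halogens get away with it, model the filament's thermal inertia as a one-pole low-pass filter (the time constant below is a guess, purely for illustration):

```python
# The filament cannot follow the drive level instantly; it "chases" it.
# A one-pole filter with a guessed time constant smooths a janky ramp.

tau = 0.15                 # assumed filament time constant, seconds
dt  = 1 / 44               # one DMX frame period
alpha = dt / (tau + dt)    # standard one-pole smoothing coefficient

janky = [0, 1, 2, 2, 4, 5, 5, 7, 8, 8, 10]   # repeats and skips
light = 0.0
for drive in janky:
    light += alpha * (drive - light)   # output drifts toward the input
    print(f"drive {drive:2d} -> light {light:5.2f}")
```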
Now, I can probably restructure my program so that the fade computations synchronize with the part that sends the DMX512 frames (the kind of restructuring I have in mind is sketched below). That's going to be a challenge, but I knew the job was dangerous when I took it. My question is this: how do professional systems deal with the problem of synchronizing computed fades with the transmission of DMX512 frames? Is there an equivalent in DMX512 programming to the Vsync that solves this problem in video games? If so, does it have a name?
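Specifically, something like this: no free-running fade loop at all; every channel's level is computed from the clock at the moment each frame is built, so there is nothing to drift out of phase (send_frame() is a made-up stand-in for a real transmitter):

```python
# Compute levels at transmit time: each DMX frame carries the freshest
# value, so no step is ever missed or doubled.

import time

def fade_level(start, end, t0, duration, now):
    """Level at time 'now' for a linear fade, clamped to the endpoints."""
    if now >= t0 + duration:
        return end
    return round(start + (end - start) * (now - t0) / duration)

def send_frame(levels):
    print("frame:", levels[0])       # stand-in for real DMX output

levels = [0] * 512
t0 = time.monotonic()
while True:
    now = time.monotonic()
    levels[0] = fade_level(0, 255, t0, 2.0, now)   # 2-second fade to full
    send_frame(levels)
    if now >= t0 + 2.0:              # final frame (at full) has been sent
        break
    time.sleep(1 / 44)               # pace frames at the DMX rate
```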
Thanks!