Sensor Rack Woes - DMX Input Error

StradivariusBone

So we're seeing an intermittent input error from port A on our Sensor. This rack has had some issues that we've mostly resolved over the past few months. Initially we had a CEM with a bad zero cross detector on phase A. After sending it back and forth a few times we were not able to fix it (though ETC bench tested it and said it worked fine with them, which I do believe). For some reason it wouldn't work right in our rack, but would elsewhere. The real kicker is that the borrowed CEM worked fine. Fast forward to a lightning strike which trashed our arch controller. Troubleshot that problem down to the transceiver ICs on the stations and on the DAS controller. Replaced them and got it running again.

Now we are getting this intermittent error, only on Rack 1 (the troublesome one). I swapped CEMs and the error initially didn't follow, leading me to reason that it might be a problem with the terminal contacts on the CEM or the backplane. Since we experienced trouble with one known-good CEM but no trouble with a separate good CEM, perhaps the problem is in the connections, maybe corrosion or debris on the terminals. I cleaned the CEM terminals with isopropyl alcohol and blew out the connector on the backplane per ETC tech support, and the problem seemed to leave us. Until today.

Now the same CEM (since moved into Rack 2) is exhibiting DMX port A input errors. With these errors comes a flickering to full of random dimmers, not favoring any particular phase that I can see. It also seems more prevalent when a lot of dimmers are at mid-levels, but it might just be that it's more noticeable under those circumstances.
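To take the eyeball factor out of the mid-levels question, something like this little sketch could log the jumps between consecutive frames objectively. It's purely hypothetical (the frame capture itself is left abstract, and the jump threshold is an arbitrary number I made up):

[CODE=python]
def flag_flicker(prev_frame, frame, jump=64):
    """Compare two 512-slot DMX frames and return (slot, old, new)
    tuples (1-indexed) wherever a level leapt by more than `jump`.
    A dimmer jumping from a mid-level to full between consecutive
    frames is a candidate for data corruption, not programming."""
    return [(i + 1, a, b)
            for i, (a, b) in enumerate(zip(prev_frame, frame))
            if abs(a - b) >= jump]

# Example: slot 3 leaps from 50% (128) to full between frames.
prev = [0] * 512
cur = [0] * 512
prev[2], cur[2] = 128, 255
print(flag_flicker(prev, cur))  # -> [(3, 128, 255)]
[/CODE]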

It comes and goes intermittently. As I am typing this I'm getting a "System OK" display.

Other troubleshooting included changing our controller and changing the cable going from the controller to the wall. There are three input ports in the building; I haven't attempted using anything but the furthest one at this point. That will be my next troubleshooting step.

I also noticed that the CEM uses the same transceiver chips (SN7517) that are on the interface PCBs with the arch controllers. I had a couple of spare new ones that I tried swapping into the CEM. Though I'm not sure they're used on the DMX side of the interface, I figured it couldn't make anything worse at this point. It still had errors when I powered back on, but they have since stopped. Not sure if that made a difference or not.

I do plan on calling tech support, but I figured I'd poll the booth for any help here as well. Frustration abounds.
 
Disconnecting other parts of the system is a common technique. Drop any network or other control inputs or outputs and see if the issue stays. Change to a known-good DMX input on a short cable. Then swap parts around. I call it 'divide and conquer'. :)

Intermittent is trouble! :wall:
 
This is why it's handy to have a cheap little DMX board, or a laptop with a dongle, on hand. Disconnect the whole system, plug the board/laptop into the offending rack, and see if the problem is present. Especially after a building has taken a lightning strike, you may have problems hiding somewhere else. Eliminate everything. If the rack works, you know to hunt elsewhere.
Now, I spent many years in electronic service working the bench, and often various units would come in that tested OK. That doesn't mean they really are OK. Often, something may be marginal but within spec. Something else may be at the site which is also marginal but within spec. Put the two together and they no-workey!
 
We did test with different inputs: I used our old Express, and I tried different dongles and different computers. All met with the same result while it was receiving errors.

I just attempted plugging into another input (closer physically to the rack in the daisy chain of three) and the error has ceased. But it had stopped acting up a bit ago, shortly after I replaced the transceivers and cycled power.

So I don't know if that was the smoking gun, since it still had errors right after changing the chips, even though it corrected within five minutes of the swap.

We are using a nice short, known-good DMX cable with another laptop running LightFactory connected to the original input port. So far so good.

Intermittent is trouble! :wall:

Preach! :pray:
 

I stand corrected. Errors and flicker again. Since it is not affecting both racks and did follow the CEM to Rack 2, troubleshooting narrows this down to a CEM-specific fault. This is the replacement CEM that worked when our original one failed multiple times with the phase-detection fault.

I've got a tech rehearsal in a bit, so I won't be able to call it in until later, unfortunately. Timing is, as always, everything.
 
Prior to my stroke, when I could still see, my Goddard MiniDMX'ter was among my best, and most trusted, friends when it came to tracking down DMX faults. When I purchased it, it was one of a kind, but there are many similar devices on the market now at a range of price points. If you plan on continuing in this line of work / problem solving, I highly recommend investing in a similar device. Having a battery-powered, trustworthy source and reader of DMX512-A is rarely a bad thing.
Toodleoo!
Ron Hebbard.
 

I was just looking at that in some other threads. Honestly, at this point, with how many issues we've had with this particular system, I don't think I can fix it without seeing what data is actually being sent. I've thought about trying to get an oscilloscope to take apart what's actually going into the rack. Maybe call Ben Heck and see if he can do a show on it, lol.
 
So here's the saga today: I patched my laptop directly into the rack by disconnecting the DMX port A wiring from the DAS and splicing in a terminal. So now port A on the rack is fed exclusively from the laptop via about 30' of DMX cable. I ran into issues with this setup initially when using two shorter cables plugged together. I'm wondering if the ENTTEC dongle didn't have the power to drive the signal (note: total length of that run was less than 60', and both cables and the 3-to-5-pin adapter all tested good).

The error remains, and it's following the CEM from Rack 1 to Rack 2 and back again. From what I can tell, I've eliminated the DAS and the cabling as causes of this fault.
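For anyone wanting to replicate this kind of direct-feed soak test: assuming an Open DMX-style FTDI dongle that shows up as a plain serial port (the USB Pro speaks a different packet protocol, so this wouldn't apply there), a minimal sketch looks something like this. The device path and refresh rate are my assumptions:

[CODE=python]
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # hypothetical device path; adjust for your OS
LEVEL = 128            # park every slot at 50%, where the flicker shows

# DMX frame: null start code followed by 512 level slots.
frame = bytes([0]) + bytes([LEVEL] * 512)

with serial.Serial(PORT, baudrate=250000, bytesize=serial.EIGHTBITS,
                   parity=serial.PARITY_NONE,
                   stopbits=serial.STOPBITS_TWO) as ser:
    while True:
        ser.send_break(duration=0.001)  # break (spec minimum is 88 us)
        ser.write(frame)                # one complete, known-good frame
        time.sleep(0.025)               # roughly 40 packets per second
[/CODE]

With the rack fed nothing but this known, static frame, any flicker that remains has to be downstream of the cable.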
 


Just because the error persists without the DAS doesn't mean the DAS is working; it only means it's not the cause of this particular problem. It's possible that two different faults cause the same symptom independently, and that is one tough nut to crack. The fact that the problem goes away with one CEM is encouraging. I always specify a spare CEM; it's especially convenient with ETC, which stores the whole config in every CEM in the system, so if one fails, the replacement CEM is automatically configured.
 
Seems to me the problem has been narrowed to the CEM. The ENTTEC dongle should be good for extended distances. There are other chips on the CEM that can go wonky, particularly if you've had a lightning hit. Since this CEM is from ETC, they should be able to get another one to you.
 
Here's an update after tech help from ETC: they're thinking it might be one (probably now both) of the optocoupler ICs on the CEM. Initially they recommended replacing the transceiver chips (7517), which I had already done to no avail. Though the failure is concentrated on input A, I have noticed intermittent distress from B as well when running from the DAS. I actually tried using the B input directly (bypassing the DAS) to run the rack from my controller, but the same faults occurred. We tried swapping optocouplers to see if that would make a difference; no dice. I've ordered a few from Arrow since they were the quickest and cheapest option.

We managed to survive our opening with a modicum of issues. The flickering is only noticeable when dimmers are between 1% and 99%; if they're at 0 or 100 they seem to be mostly left alone. With that, we were able to adjust our programming to minimize the "effect".
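Since levels parked at 0 or full seemed immune, the programming workaround boils down to snapping near-extreme levels all the way to the extremes wherever the look allows. A toy sketch of the idea (the thresholds are made up):

[CODE=python]
def snap_levels(frame, lo=3, hi=252):
    """Push nearly-off levels to 0 and nearly-full levels to 255,
    since those two values rode out the corruption; true mid-levels
    are left alone as the price of the look."""
    return [0 if v <= lo else 255 if v >= hi else v for v in frame]

print(snap_levels([0, 2, 128, 250, 254]))  # -> [0, 0, 128, 250, 255]
[/CODE]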

It's nice having two CEMs to do some basic troubleshooting. My part-time gig church has a 24-dimmer Sensor with just one, and it's super annoying not to be able to do basic testing. I've thought about swapping in mine from work, but the software versions are different and I'm not sure how that might affect things. It would probably be fine swapping the PAC CEM into the single rack at the church, but going the other way I would imagine two CEMs running different versions would cause a problem.
 
Since we experienced trouble with one known-good CEM but no trouble with a separate good CEM, perhaps the problem is in the connections, maybe corrosion or debris on the terminals. I cleaned the CEM terminals with isopropyl alcohol and blew out the connector on the backplane per ETC tech support, and the problem seemed to leave us. Until today.

You are in a location where I would expect severe oxidation on contacts. Alcohol is a mediocre solvent for removing oxidation, and it does nothing to prevent a recurrence. The fact that the problem improved for some time leads me to believe you found the problem but didn't cure it. Also supporting my theory: pulling things apart causes the problem to hide for a bit and come back. That screams "mechanical," not electronic. Clean the contacts again with DeoxIT D5 and give them some time to dry. Then treat them with DeoxIT Gold preservative. The stuff really works.
 
I've got a can of RadioShack-brand electronics cleaner handy I can try. Probably not as good long-term, but I'll have to order the DeoxIT stuff online. Oxidation/debris was my first thought too (and confirmed with ETC), since the problem went away immediately after swapping racks. The cotton swabs produced a good deal of greyish matter after wiping down with alcohol, which supports the oxidation theory.

This is, after all, a CEM that has probably spent most of its life in the cozy wilds of Wisconsin. It wasn't battle-hardened against humidity, salt spray, random roof leaks, and lightning like its new brother CEM. Not to mention the occasional Florida Man interaction. :eek: :confused:

Post Contact Cleaner Edit-

After powering back on I got a flurry of input errors from both ports, lots of dimmer activity. Then it settled down and ran normally. Hmmmmmm.


Post 1st Act Edit-

Still DMX erroring.
 
I think I may have found something. I sent the troublesome CEM back home at ETC's request; they found nothing wrong with it on their side of the world. They did a bunch of PM stuff and replaced the transceiver and optocoupler ICs on both DMX inputs. I just put the thing back in and immediately saw "no input from port B," which was to be expected since the DAS was off, but then it started up again with that port B input error crap. Which makes no sense, since there's nothing turned on in port B land.

Sorry to create multiple threads here; my DAS/arch controller is also on the suspect list, since it tried to kill my building last month. I had the DAS shut down by removing both fuses, but not at the breaker. Long story short, it froze up and commanded all the dimmers to full while the theatre was dark for 20 days. The CEM was fine on the bench in Wisconsin, so the smoking gun is in this dimmer room somewhere.

So as I'm cursing, I'm wondering how this thing gets its power. I opened the DAS's right side to check out the power supply, thinking maybe a cap is going and it's not sending clean DC to the stations or whatever. The power supply is mounted to a heatsink/RF shield, and there are a couple of things grounded to it (I did turn off the breaker at this point, since there's still mains power coming into the box even with the fuses pulled). I noticed a few solderless terminals mounted to the shield for ground. One of them is the cable shielding for the DMX runs. That one was not terminated properly and was loose.

It's a little early to celebrate, but after re-terminating the shield wire to ground, I put everything back together, powered on, and so far no errors. My thinking is that the shield wire is grounded at each station, but if left open inside the RF shield of the transformer, every little bump on the building mains (keeping in mind the DAS is fed from a different panel than the rack) would be enough to induce noise in the shield and send garbage down all the DMX lines. We just had a ton of work done on our HVAC (new chiller, etc.), and they were screwing around in the electrical closet that feeds the DAS, so I don't think it's a stretch to assume this might be a sort of ground-loop issue.

I guess testing this theory would involve removing the ground and trying to replicate the errors. This DAS has been raising hell in here since long before my time. It is likely not an original install mistake, as they replaced a lot of the DMX cabling maybe 10-12 years ago. I'm just looking to see if my logic is sound with those better traveled on the Booth.
 
Could be!
Although we tend to think of DMX as a comparison between pins 2 and 3, and that pin 1 doesn't matter, that just isn't the case in the real world! If the voltage level on pin 2 or 3 exceeds the supply rails feeding the receiver chip, referenced to the circuit ground, all bets are off. It's basically like an audio circuit clipping. Since the supply rails are usually (circuit) ground and +5 V, any noise that takes pin 2 or 3 below circuit ground potential, or more than +5 V above it, will produce a random 1 or 0 on the receiver output regardless of the comparison of pins 2 and 3. Not hard to imagine that happening if pin 1 isn't tied to the circuit ground. (Not to be confused with the chassis or frame ground.)
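A toy model of that clipping argument, with made-up rail numbers (this illustrates the reasoning, not the actual internals of an SN75176-class receiver):

[CODE=python]
import random

def receiver_bit(v_data_minus, v_data_plus, rail_low=0.0, rail_high=5.0):
    """Differential receiver referenced to circuit ground.

    Inside the rails, the output is the honest comparison of the two
    data pins. If noise drags either pin outside the rails, the input
    stage clips and the output is effectively a coin flip.
    """
    if (rail_low <= v_data_minus <= rail_high
            and rail_low <= v_data_plus <= rail_high):
        return 1 if v_data_plus > v_data_minus else 0
    return random.randint(0, 1)  # clipped input -> garbage bit

# Clean bit: 2.4 V differential, both pins inside the rails.
print(receiver_bit(1.3, 3.7))    # always 1

# Same 2.4 V differential, but a floating shield has let the common
# mode drift 4 V below circuit ground: the output is now unpredictable.
print(receiver_bit(-2.7, -0.3))  # randomly 1 or 0
[/CODE]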
 
Back to the drawing board. Input errors on both ports. Lights flickering worse than before. I took port B completely out of the CEM and those errors went away. Going to rewire the port A input next week with new data cable. It's completely isolated from the DAS, which is off. I have no idea what could still be causing this failure. Completely fed up at this point and glad it's Friday... :wall:
 
So this is still only happening in the rack with the suspect CEM? I would see if ETC can send you another loaner even though they have looked at your CEM already.

From the tests you have done, I would expect both racks to be in error if there were a DMX signal problem.
 
TL;DR: Pardon me while I bore you with one more of my several Colortran ENR tales, maybe even two.

One of the problems we experienced with our ENRs was an irritating pulsing / flickering, for which Colortran had a fellow from California fly up to Hamilton, Ontario and spend an overnight session with Chris Mentis, who at the time was performing Colortran's Eastern Canadian commissioning and warranty servicing, and me. First thing the gentleman from California found was that the control trays were trying to overcompensate for changes in line voltage, treating every momentary spike, no matter how brief, as if the voltage had gone really high and needed to be dimmed down to compensate. The gentleman from California had fortunately brought one of Colortran's latest EPROMs with him to upgrade us to their very latest corrections. Having upgraded the first processor to this latest chip, the three of us sat back to chat while we waited to see if the problems (we had several) were now solved.

Just when we were beginning to feel confident, suddenly things got silly again. Not seeing any logical causes, the fellow from California opted to drive stakes through our processors' hearts and start over from a clean slate by "fat fingering", quoting him, all new data into our racks from his personal laptop. As he was now planning on leaving no stone unturned, he decided to confirm the EPROM version in our second (slave) processor, where he was VERY surprised to see it had left the factory with a very old EPROM, several generations out of date, which could no longer successfully communicate with the new EPROM he'd just installed. Fortunately he had another processor in his bag of tricks, and between the two new processors the bulk of our problems were solved.

That theatre opened in the fall of 1991, and when I moved on, between Christmas and New Year's 1992, we had something like 17 documented service calls with Chris Mentis and still had lingering problems. The ENRs handled all of the house and work lights, and the system would lock up in whatever state it felt like, whenever it felt like, with zero warning. It's one thing to be in 'House At Full' and unable to go to half and out and start the show, but it's no fun to hit interval and not be able to turn up the houselights.

For a while, the following became standard operating practice: At five minutes to interval the SM would warn me, and I'd toggle the backlit panel on my booth remote between 'House At Full' and 'House Out' to see if the indicators were going to respond. If the indicators responded, we were ready to proceed as normal. If they didn't, the SM would dispatch the flyman to the dimmer room and the house manager to her 'panic station'. When the patrons were applauding the end of act one and trying to race to the bar, the SM would call for 'House At Full' and the ENRs would either respond or not. If no house lights, the flyman would hit reset. If still no houselights, and as the applause was waning, the SM would instruct the house manager to hit her 'Panic' switch, and the house lights would either snap to a blazing setting rarely ever used or we'd still be sitting there in black with the curtain wash and the SM explaining the situation on the 'God Mic'. (Good old reliable Strand AMX-192 CD80's.) We lived with this kind of unreliable, problematic non-reliability, and it eventually got to having several Ianiro 1K Polaris fresnels devoted to back-up emergency house lights. If you're looking for someone to recommend Colortran ENRs, you needn't bother asking me, lest I bore you with more of my ENR tales of woe.
Toodleoo!
Ron Hebbard.
 
So now both CEMs are exhibiting oddities. Rack 1 is showing intermittent port A failures (which is where my main controller comes in) and Rack 2 is showing port B failures. Rack 1 has flickering when the port A errors come on, but Rack 2 does not. Flickering seems to increase when more fixtures are turned on, but was concentrated mostly on dimmers 88, 89, 90, and 96 when running house and work lights (89 is a work light, but the others are not used in that look).

I shut down the DAS completely, so there is no data on port B (which makes it odd that Rack 2 is getting input errors from that port). I've tried bypassing the DAS by straight-wiring the output cable from the controller to the input cable to the rack. Still errors. I took the whole rack apart yesterday and looked closely at the terminals. Rack 2 had a lot of blue corrosion on the slot, so I dismantled it and cleaned it up shiny. Rack 1 was dirty, but not corroded. Cleaned both slots. Nothing.

I'm at a loss. Going to call ETC today, but I can't do much in the way of troubleshooting until a group that's rehearsing in here is finished. I feel like the only thing left is to rewire everything going into the backplane.
 
A DMX port error message can simply mean the port isn't receiving DMX at all. If you've shut down the source of DMX for that port, then the error message would be expected.
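Put another way, "port error" often just means a loss-of-signal timer expired. A rough sketch of that receiver-side logic (the one-second holdover is my assumption, not ETC's actual figure):

[CODE=python]
import time

class PortWatchdog:
    """Flag a DMX port error when no valid packet arrives in time."""

    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.last_packet = None

    def packet_received(self):
        # Call this each time a valid DMX packet lands on the port.
        self.last_packet = time.monotonic()

    def status(self):
        if self.last_packet is None:
            return "PORT ERROR: no input"    # source never connected
        if time.monotonic() - self.last_packet > self.timeout:
            return "PORT ERROR: input lost"  # source shut down
        return "System OK"

wd = PortWatchdog()
print(wd.status())    # DAS off on port B -> error is expected
wd.packet_received()
print(wd.status())    # -> System OK
[/CODE]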
 
