I wanna learn networking for lighting

I don't have a lot of real world experience, but I'd like to learn networking for lighting. Is there a book or class out there, like the Ethernet for Entertainment class they had at LDI? I've read John Huntington's Show Networks and Control Systems, which is great, but I'm looking for something more practical and specific to Art-Net and sACN.
 
I took a CCNA course back in high school... back when that was the cool thing to do. Best thing I ever did. If you're in school now and reading this, find that class if you can and take it.

With that though, there is very little difference between most lighting networks and home networking. While John's book is great, I feel he dives a bit too deep into the water. That's a great thing, but most of us will never deal with VLANs in the lighting world.

So, what are most people looking for? Do you want to know how to set up a Wi-Fi network, or just a wired network with nodes? Is anyone looking to go over 300' runs and care about fiber? I might be able to do something in the next few weeks that could get you going.
 
One of the great things about lighting networking is that there isn't much to know that's different from any other modern networking. This is particularly true for small rigs; as with all networks, the larger it is, the more complex it is. One key note is that sACN and Art-Net (depending on version) are different in some of those more complex ways.

Reading the manual for any hardware and software would be the first step; all together the networking sections might add up to 10 pages. After all, most small setups are made up of three things: a console; a router/switch/hub/whatever you want to call it; and a DMX gateway/node, plus maybe a networked dimmer rack or a Wi-Fi access point. If you need more than that, it's not a small network.

Hmmm, topics to read up on:
  • How to set an IP, subnet mask, and default gateway. And probably a Wi-Fi channel.
  • Know what a DHCP server is and whether your network has one, or two.
  • Identify common damage to cables.
  • Have and practice with a Wi-Fi analyzer program.
  • Recognize some of the special IP address ranges and why they are special (192..., 10..., 169...) -- there's a quick sketch of this right after the list.
  • Why NOT to connect it to the internet.
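For that special-ranges bullet, here's a minimal sketch using Python's built-in ipaddress module, assuming you have Python 3 on hand; the addresses are made-up examples, not anything from a real rig:

Code:
import ipaddress

# Made-up example addresses -- swap in whatever your console or nodes report.
examples = ["192.168.1.10", "10.101.50.3", "169.254.23.7", "8.8.8.8"]

for addr in examples:
    ip = ipaddress.ip_address(addr)
    if ip.is_link_local:
        note = "link-local (self-assigned because nothing answered DHCP)"
    elif ip.is_private:
        note = "private (the usual 192.168.x.x / 10.x.x.x ranges)"
    else:
        note = "public/routable (probably not what you want on a show network)"
    print(f"{addr}: {note}")

The 169.254 case is the one worth memorizing: if a device shows up there, it asked for DHCP and nobody answered.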
I'm sure others will come along with improvements on my thoughts.
 
It's a little unrelated but if you go on Audinate's website they have some great online tutorials and training for their Dante audio networks. The purpose of the networks is different but the concept of getting mission-critical, low-latency packet transfers across your network is very similar to what you'd see for lighting. They also do a pretty good job of breaking down concepts like QoS, DiffServ, TCP/UDP, subnets, and Layer 2 devices versus Layer 3 devices.

Usually these things don't matter much unless you're connected to a larger network, though. If the lighting system is the only source of traffic on the network, it can be very forgiving about which settings you use. If you're hooked up to a wider network with the audio or the building's guest Wi-Fi, then you've got a few more chainsaws to juggle, and the subnetting, routing, QoS, and network topology really start to matter.

Interesting side note:

Wall Street is actually starting to use TSN (Time-Sensitive Networking, formerly AVB, Audio Video Bridging), the other big AV network protocol, for security purposes. They've discovered that if you can get mission-critical, low-latency packet transfers across a network, you can determine whether the data is reaching the other end of the network later than it should be. In this case, "later" means being able to detect down to the microsecond how delayed your packets are. That can indicate you have a man-in-the-middle attack going on and someone is intercepting your data before it reaches its destination.
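Not TSN itself, but the underlying idea (build a latency baseline, then flag anything arriving later than it should) fits in a few lines. A toy sketch in Python with made-up numbers; a real deployment would use hardware timestamps, not software ones:

Code:
from statistics import median

def flag_delayed(latencies_us, warmup=20, tolerance_us=50.0):
    """Return (index, latency) pairs that exceed the warm-up baseline."""
    baseline = median(latencies_us[:warmup])
    return [(i, t) for i, t in enumerate(latencies_us[warmup:], start=warmup)
            if t > baseline + tolerance_us]

# Made-up data: ~120 us per packet normally, then something in the path
# starts adding ~400 us -- exactly the kind of shift you'd want to notice.
normal = [118.0 + (i % 7) for i in range(40)]
fishy = [520.0 + (i % 7) for i in range(5)]
print(flag_delayed(normal + fishy))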
 
I got some crappy book on show networking and show control. Yeah, I think it was this, https://www.amazon.com/dp/0615655904/?tag=controlbooth-20 , I'm pretty sure that's what Footer was talking about. VLANs could actually be quite important in festival settings with bands sharing a snake, and for keeping video and lighting separate and/or connecting them in the way you want.

The book was OK for basic info, but it was more like a book for a 101 tech theater class, and much less informative than an entry-level high school networking class.
What bothered me was the section on a proprietary Layer 3 option on one specific (mostly Layer 2) Cisco switch, rather than an actual discussion of Layer 3 networking in general. I know Layer 3 is not really that applicable in most lighting networks, but focusing on one specific model, even the standard across the industry, really does not serve the reader very well, and an advanced user may need to communicate between networks, like lighting and video. In an installed venue, Layer 3 could be a very real consideration, but the book basically did not address it besides its limited use on the Cisco stacking platform.

If you did not understand the last sentence, then you will probably learn something from the book, but save the $50 and just do some googling, or spend a bit more and take a community college class, and you will be WAY ahead of this book.

@Rob, what bugs me in the industry is the lack of focus on redundancy, especially in FOH snake lines. I see lots of BIG tours simply using one network cable from the console to dimmers. I want to commend Pathport for integrating redundancy in your products. I have not used them personally, but I have seen demos, and I assume it is some application of link aggregation or very good and fast RSTP. If you saw the write-up in PLSN about the lighting network at the last Super Bowl, there were not redundant lines between lighting control racks, but there were backup Pathway switches. I thought this was quite silly, because the backups were completely offline. And really, a switch is probably the piece of equipment in the system least vulnerable to failure, while a cable is probably the most likely failure point. Why not run two cables/fibers? And why not connect the two switches?

I know it was PRG, and they do big crap, but it seems to me that a lot of people in the lighting industry could use some more network training.

@Pie4Weebl, if you want to borrow that book, I'll ship it to you.
 
VLANs could work in that situation, but we still live in a world where a good portion of FOH snakes are run over AES or some other protocol that is not routable. I don't see that changing in the near future. VLANs can be useful in some situations, but anything that has either sACN or Dante on it should be on its own physical network without any extra packet headers.

I somewhat remember the part in the book where he talks about switches. I think he spends a lot of time talking about how a Cisco 2950 (and now the 2960) works and how to work with it. He does this for good reason. It is THE switch. By far, it is the most-produced piece of pro-level networking gear ever. It really is something you need to know if you are going to get into network design.

Redundancy is another thing that is hard for Ethernet networks to deal with. Simply put, the system was not designed to do it. It can be done, but it takes real knowledge... and the cheap stuff doesn't do it at all. I use Brocade switches at work for our IT infrastructure and they do it really well. It's very easy to trunk ports together and have them fail gracefully. Those are $5,000 switches. It takes a lot of processing power to do this right... and proper network design.

As far as the layer 3 stuff goes, I don't like to see any of that in a show network. You should not be passing data through a router in our business. Way too much latency and really no need.

I will fully agree though that we are really lacking in network knowledge on the user side of our business.
 
Hi Mike,

I work for PRG and I was the system tech on the Super Bowl, so I can give you some more insight on how it was laid out. Last year we actually did have fully redundant networks. We ran multiple fiber lines to each location, where there was a second switch that was plugged in and powered up. Switches and nodes at each dimmer pit were also on UPS power so that data pass-through could be maintained should a particular location lose power. Almost every location had a spare node as well that was plugged into the backup system. We did not run a ring system, though, as it wasn't practical due to cable paths and the amount of fiber required. Also, this particular show is very constrained time-wise. We only have 6 minutes to set up, followed by a 12-minute show and a 6-minute load-out, so there is a very limited window, if any at all, to troubleshoot problems. As a result, a network that is simple is more robust, reliable, and easier to troubleshoot than a more advanced setup. During the show I was at FOH monitoring the system health, and we had a person at each dimmer location with a radio, the idea being that we could react as quickly as possible should a problem present itself.

Generally, no matter how much redundancy we build into a system, there will still always be failure choke points. Obviously we try to minimize them as much as possible. We try to make sure we know what and where they are and, at the very least, have a plan in place to deal with them should there be a failure. Case in point: at FOH I had four active switches and two spares. I would have liked to have had a backup for each one, but we just didn't have enough available. So I had two spares powered up and ready to go, with the plan that should one of the main switches fail I could quickly and easily re-plug cables. As far as cables are concerned, I am a big proponent of the "one is none, two is one" philosophy. I always run two fibers or two XLRs on my main runs. It has also been my experience that no one particular piece of equipment is less failure-prone than any other. This means we stay paranoid and do multiple tests and checks on the system to try to stay ahead of the gig.

Below is the system diagram that I drew up. This is the final version before we loaded in, so not all the spares are on it, and we made some changes onsite as we went along. Take a look, and if you have questions I will do my best to answer them for you.

Chris Conti
 

Attachments

  • SB Ver 4.pdf (2.8 MB)
I will fully agree though that we are really lacking in network knowledge on the user side of our business.

In a lot of ways, this is by design. Dante is really built as a plug & play solution. Really gives new meaning to "knowing enough to be dangerous." If you know nothing and plug your equipment together daisy-chained or without redundancy, like the diagram the manufacturer gives you, it will probably all work even on Layer 2 $50 switches. If you know enough to start messing with latency settings, trying to do VLANs, using redundancy, or combining networks together, you can brew up a situation where something works really well right up until total catastrophic failure, where the only way to troubleshoot is to dismantle every piece of the system and bring it back online one device at a time until you find the break point. I get nervous when I see those guys who just bought a console and a couple of stage boxes tell me they need to learn all about networking for their next big show coming up. If they touch nothing, they'll probably have better results than if they get tweaky.
 
If you know enough to start messing with latency settings, trying to do VLANs, using redundancy, or combining networks together, you can brew up a situation where something works really well right up until total catastrophic failure
Agreed. Network Management is literally a thing you go to school for. It can get VERY complicated. That is why we are so successful making and selling a switch designed specifically for this market. We have a UI right on the front where you can dial in things like: "QoS = Dante Strict"; "VLAN = 5"; "Convert ArtNet to sACN = On". You won't find any of that on a layer 2 $50 switch.
 
With that though, there is very little difference between most lighting networks and home networking. While John's book is great, I feel he dives a bit too deep into the water. That's a great thing, but most of us will never deal with VLANs in the lighting world.

And this is the heart of the matter. A good understanding of how computer networks work will take you to about 75% of your goal. Putting aside device adapters and specific entertainment languages, the bigger task is getting all the nodes to talk to each other (and STAY talking!). A solid understanding of IP addresses and subnet masks, the role of a DHCP device (usually a router), the difference between DHCP and static IP, the types of copper and fiber-optic cabling, IPv4 vs. IPv6, and other basic computer networking concepts will bring you very close to your goal. The best part is, there is a flood of this information available for free on the net. The downside is you may have to take a class in order to pull all these concepts together.
It's funny, I am a lousy teacher on the subject. I was there in the late '70s when networking first became a useful concept. When I go to explain something, I often forget that unless you understand some of the history behind networking, it is hard to understand why things evolved the way they did. We tend to forget why the NOS layer sits between the BIOS and the OS, as booting across a network has long been a thing of the past!
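If it helps to see the IP/subnet-mask piece concretely, here's a small sketch with Python's ipaddress module; the device names and addresses are hypothetical:

Code:
import ipaddress

# Hypothetical devices, each with its address and subnet mask.
console = ipaddress.ip_interface("10.101.1.101/255.255.0.0")
node    = ipaddress.ip_interface("10.101.50.3/255.255.0.0")
stray   = ipaddress.ip_interface("192.168.1.20/255.255.255.0")

def same_subnet(a, b):
    # Two devices can only talk directly if the mask puts them in the
    # same network; otherwise the traffic needs a router.
    return a.network == b.network

print(same_subnet(console, node))   # True  -> they will find each other
print(same_subnet(console, stray))  # False -> readdress it (or add a router)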
 
Seriously, ditto that. I would love, like, a 10-page book of "so you have a simple lighting network, here is what you need to know."

We have some great manuals, system diagrams, and training videos about our S400 Power and Data Distribution System as well as our Fiber Optic Systems that you can download and look at. We have included a lot of general networking information and best practices in them so please feel free to check them out. Link is below.

http://www.prg.com/technology/products/power-distribution/series-400-power-data-distribution-system

Additionally, if you are in the NYC area, I run S400 networking classes all the time.
 
Chris, that is a well-organized network. My question is why the two fiber lines are separate, going to separate switches, when they could be aggregated into a redundant link. I think, though, you already answered the question when you mentioned that a simpler signal flow would be much faster to troubleshoot. If a node lost signal, it would be pretty quick to just switch one Cat5/6 line over to the other switch/fiber run, thereby bypassing a whole group of equipment, because there isn't any time to figure out more specifically where the problem occurred.

I guess this is the rule of diminishing returns: no matter how huge and complicated you make things, there will always be single points of failure. Sometimes I have trouble keeping perspective on when a lot of extra effort (and money) yields little return in function or quality.

Thanks for sharing the diagram and the extra info; it's very informative.

@Footer, good points, but I was thinking lighting and you were thinking audio. Certainly I would rather video/audio/lighting/building infrastructure exist on separate physical networks in most cases. And yes, with entertainment we would seldom get into Layer 3. My thought was more to use VLANs in a festival situation, or a separate VLAN on a theater's lighting network so a tour can keep their floor package or video controls separate from the house light rig.
For example (a rough sketch of the plan is below):
VLAN 1 - festival light rig, controlled by Art-Net
VLAN 2 - headliner's MA-Net, for their floor package and console/tracking backup console
VLAN 3 - support act's TitanNet, for their floor package and console/tracking backup console
...also running several cat lines as a trunk for redundancy, seeing as there will probably be a bunch of crowd surfers stomping all over them.
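Written out as a rough plan it would look something like this (the VLAN IDs, subnets, and names are just made up to show the idea, not anyone's actual festival setup):

Code:
import ipaddress

# Illustrative only -- each VLAN gets its own protocol and its own IP space.
vlan_plan = {
    1: ("Festival rig / Art-Net", "2.0.0.0/8"),
    2: ("Headliner MA-Net",       "192.168.10.0/24"),
    3: ("Support TitanNet",       "192.168.20.0/24"),
}

# Quick sanity check: the VLANs' IP spaces shouldn't overlap, or traffic
# leaking between them gets very confusing to troubleshoot.
nets = {vid: ipaddress.ip_network(subnet) for vid, (name, subnet) in vlan_plan.items()}
for a in nets:
    for b in nets:
        if a < b and nets[a].overlaps(nets[b]):
            print(f"VLAN {a} and VLAN {b} overlap: {nets[a]} / {nets[b]}")
print({vid: str(net) for vid, net in nets.items()})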

This would be nice if the bands did not want to pull their own snake through the mud for a one-off. If you start trying to be too cute with the setup, it's easy to get into trouble, and people become hesitant to trust a network they don't know about, and rightfully so. Bands in this setup would always have the option of running their own snake and coming off their desk/switch for just the festival rig. Personally, if I were the headliner, I would use as much of my own gear as possible. If something goes wrong and I say "well, the festival Cat5 crapped out," the band is going to ask why I did not use the snake that they PAID FOR and that is still sitting in the truck. Now, for a daytime slot at 3 pm on a sunny day, I'd probably be happy to use the provided snake, since most of the gear would not be coming off the truck to begin with.

A few years ago at Rock in Rio (the one in Vegas) I heard many horror stories about the terrible lighting network. I don't know what the major issues were, but apparently they were experiencing up to 5 seconds of latency, and this was with timecoded shows. So there's good reason that most manufacturers just recommend you use an unmanaged switch, plug everything in, and leave it alone.
 
Certainly I would rather video/audio/lighting/building infrastructure exist on separate physical networks in most cases. And yes, with entertainment we would seldom get into Layer 3. My thought was more to use VLANs in a festival situation, or a separate VLAN on a theater's lighting network so a tour can keep their floor package or video controls separate from the house light rig.
For example:
VLAN 1 - festival light rig, controlled by Art-Net
VLAN 2 - headliner's MA-Net, for their floor package and console/tracking backup console
VLAN 3 - support act's TitanNet, for their floor package and console/tracking backup console

Unless the tour is interfacing with house fixtures, they will probably not want to touch the house network if it can be avoided. In a roadhouse scenario, SM fiber, MM fiber, and STP CAT6 connectivity between places is much more important than having a backbone network. Festivals may be a little different where you have guest consoles on a shared system but usually simplicity and ease of troubleshooting trumps elegance of configuration.

VLANs are great for very specific purposes, but only when supported by someone who knows exactly what they're doing. A lot of people look at a 52-port network switch and assume that all ports are on the same system. They don't know that some ports are reserved for trunks, others are access ports for various VLANs, some ports may be turned off, and that an occasional port may be set up to mirror other ports for packet-sniffer troubleshooting. They look at a switch and assume they can plug into any port on that switch and it will work. In potentially uncontrolled environments, it's more reliable to have 2-3 switches and label each with its own purpose than to trust that people will jack into the correct ports on a large, single switch with tiny numbering and a spaghetti of cables to sift through.

The other issue at play is that it drives the cost of your switches up. If you keep one protocol per switch, you can optimize the QoS for that protocol without losing your ability to trunk. This is something you can do on a lower-cost L2 managed switch by setting the global QoS parameters. If you dump three VLANs with their own protocols onto the same switch and each has its own requirements for QoS, they'll probably behave within that switch, but one protocol will stomp on the other two when you send packets across your trunk. Setting QoS per port or per VLAN is usually an L3 feature and will put you into the thousands-of-dollars-per-switch range. Otherwise, you have to do three trunks, one per VLAN.

There's something to be said, too, for keeping network configurations simple. If you're on the road and your switch dies on a simple, non-converged network, you can probably get away with whatever you can find in the local venue's basement or at Best Buy. If you get crafty and custom and your switch gets hit by lightning or drenched, it's much harder to source a $3500 replacement switch out on the road that seamlessly integrates into your multiple VLANs without sparking up unanticipated issues.

...also running several cat lines as a trunk for redundancy, seeing as there will probably be a bunch of crowd surfers stomping all over them.
Yeah, don't put your network lines there. Sudden drastic changes in the network topology can have consequences and aggregated trunk lines aren't a good way to handle stray boot heels. If you can't reasonably secure your cables between locations, it's not worth putting thousands of dollars into switches because you have bigger problems. I haven't experimented with spanning tree and aggregated trunk lines specifically, but remember that the way spanning tree works to prevent network loops is that when it detects a change in topology, it shuts off data transmission on each port and pings them one at a time to see what changed. In a streaming lighting or audio application, plugging something in can cause noticeable blips across your network. Cisco's Rapid Spanning Tree feature and other manufacturers' versions mitigate this, but under certain conditions can still cause issues.

A few years ago at Rock in Rio (the one in Vegas) I heard many horror stories about the terrible lighting network. I don't know what the major issues were, but apparently they were experiencing up to 5 seconds of latency, and this was with timecoded shows. So there's good reason that most manufacturers just recommend you use an unmanaged switch, plug everything in, and leave it alone.

I'd be interested to hear more about this if anyone knows anything. Horror stories are the best way to learn what not to do. I know a fellow whose coworker, in a moment's inattention, killed the entire house network at a casino on the Vegas strip. The guy accidentally connected the CobraNet network to the house switches. Because they used L2 unmanaged switches for CobraNet, the multicast packets became broadcast and created a broadcast storm on the house network, sending streaming audio traffic out to all of the ports on the casino's network. In one fell swoop they knocked out the point-of-sale, security, gaming, and hotel Wi-Fi systems -- another reason why I tend to be a fan of keeping a set of switches for each protocol at play and avoiding VLANs. You can still kill a segment of a network or a single protocol, but with standalone, independent networks it's much harder to take down the primary, redundant secondary, and control networks all at once.
 
Where are these switches that keep dying on people? I have seen 2950 switches still running after 13 years of 24/7/365 service. A switch dying on you is the least of your worries. You will lose a link or a power supply first. Most good switches have redundant hot-swappable power supplies for this.

The big thing you need to know for most stuff in our business is establishing a connection to a switch and IP addressing. It's rare to have a DHCP server in most lighting and audio networks, so you will need to know how to assign addresses that allow everything to talk. While you can have a DHCP server in the form of a cheap home router, you will most likely want all your devices to have static IPs.
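For a small rig that usually just means picking one scheme and writing it down. A quick sketch of what that might look like (the subnet and device names are hypothetical, not any manufacturer's defaults):

Code:
import ipaddress

# Hypothetical static plan for a small rig.
network = ipaddress.ip_network("10.101.0.0/16")
plan = {
    "console":        "10.101.1.1",
    "backup console": "10.101.1.2",
    "gateway/node 1": "10.101.100.1",
    "gateway/node 2": "10.101.100.2",
}

addresses = [ipaddress.ip_address(a) for a in plan.values()]
assert len(set(addresses)) == len(addresses), "duplicate IP in the plan"
assert all(a in network for a in addresses), "a device is outside the subnet"
print("Every device is unique and lives on", network)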

In a lot of ways, this is by design. Dante is really built as a plug & play solution. Really gives new meaning to "knowing enough to be dangerous." If you know nothing and plug your equipment together daisy-chained or without redundancy, like the diagram the manufacturer gives you, it will probably all work even on Layer 2 $50 switches.

Wait, Dante allows you to daisy-chain? I always thought that second link on there was for a redundant link, and I have used it that way. Not the case? I would think this would cause some massive latency issues.
 
Where are these switches that keep dying on people? I have seen 2950 switches still running after 13 years of 24/7/365 service. A switch dying on you is the least of your worries. You will lose a link or a power supply first. Most good switches have redundant hot-swappable power supplies for this.

Usually someone just knocks the power cable out of the wall, but I've seen power supplies go kaput, ports die, and switches go flaky where they work for a while but need to be rebooted frequently because something gaks them up. Often it's the $50-variety switches that do this. Switch redundancy is almost not as important as link or power redundancy, but so many people go and buy a bunch of redundant switches and then hook them up to the same power connections as their primary network. All you're protecting then is your links.

Wait, Dante allows you to daisy-chain? I always thought that second link on there was for a redundant link, and I have used it that way. Not the case? I would think this would cause some massive latency issues.

Yup, you can daisy-chain. Most people do, from what I tend to encounter. I see a lot more people daisy-chaining or just using a star topology on the primary instead of going fully redundant. This is actually a bit of an issue, because most Dante devices ship factory-default in Switched mode instead of Redundant. I had a massive Dante install at a 3500-seat theater go down because one of the Lake LM-44's went AWOL and needed to be nuked and have its firmware reloaded. When that happened, it went back to Switched mode and crossed the Primary and Secondary networks together. If you haven't seen this happen before, it's really hard to track down, because in Dante Controller devices will come online and drop back offline sporadically, seemingly without rhyme or reason. It's been a while since I've been in a Lake, but I seem to recall this wasn't even a setting you could hit from the facepanel. The only way to change this in a troubled network is to disconnect the device from everything else and connect to it, and only it, to configure it for Redundant mode.

You do want to minimize switch hops to keep your latency down. The broadcast desks tend to have only 1-2 hops from the stage boxes before they leave the studio, while the desks that do recording or mix the local system tend to have more between hitting DSPs and getting ported all over the building for backstage monitoring and such.

The latency is really only an issue if you have a very poorly designed network. Goes something like this:

1 switch hop (very small network): 0.15ms
3 switch hops (small network): 0.25ms
5 switch hops (medium network): 0.5ms
10 switch hops (large network): 1ms
10+ switch hops (very large network): 5ms

The latency from Dante on a 10-hop system is negligible compared to the latency you get going through a DSP or from the FIR filter processing time before hitting your line arrays. It's the kind of thing that can be more noticeable on in-ears than in program audio, but again, only if someone dropped the ball laying out the network.
 
In the early days of Pathport, we insisted on everything having static IPs. To some extent, it made people feel more in control; they knew where things were and what to call them. But in larger networks, the headache of dealing with these tables manually is a job better done by a computer. More and more we're suggesting people use DHCP, because you never end up with an IP conflict, which is quite possible as systems age and gear is swapped out. Moreover, some new gear, like some media servers I know, will only work on DHCP. The fact is, you don't need to know everything's IP address; as long as each device has a unique one on the correct subnet with a good lease, things will find each other.
While you can have a DHCP server in the form of a cheap home router
THIS will become your weakest point if you rely on DHCP. They're light, fall over a lot, and have crappy DC connectors. They MUST be hot all the time. We believe you should have the DHCP server at the core of the network, not on some appendage. That is why our switches have the server(s) (one per VLAN) built right into them.
 
In the business world, it is handy to have some things with static IP addresses. It's pretty easy to set up your DHCP server to only assign addresses in a given range, thus reserving a block for static IP addresses. For example, we have several routers set up as wireless access points (DHCP and WAN turned off), and those routers, along with servers, copiers, and printers, all have static IP addresses so I can manage them. Anything new that comes online is then given a dynamic address via DHCP.
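A made-up example of that split, just to make the ranges concrete (the pool boundaries and device names are hypothetical):

Code:
import ipaddress

# Hypothetical: DHCP hands out .100-.199 on 192.168.1.0/24; everything
# below .100 is reserved for statically addressed gear.
pool_start = ipaddress.ip_address("192.168.1.100")
pool_end   = ipaddress.ip_address("192.168.1.199")

static_gear = {
    "access point 1": "192.168.1.2",
    "print server":   "192.168.1.10",
    "new copier":     "192.168.1.150",   # oops -- lands inside the DHCP pool
}

for name, addr in static_gear.items():
    ip = ipaddress.ip_address(addr)
    if pool_start <= ip <= pool_end:
        print(f"{name} ({addr}) sits inside the DHCP pool -- move it or reserve it")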
 
