As networking trends yo-yo between Layer 3 and Layer 2, different protocols have emerged to address issues with large Layer 2 networks. Protocols such as Transparent Interconnection of Lots of Links (TRILL), Shortest Path Bridging (SPB), and Virtual Extensible LAN (VXLAN) all aim at the need for scalability at Layer 2. Cloud scalability, spanning tree issues, and large broadcast domains become a real problem in a big data center or cloud environment.
To figure out whether something like TRILL is a solution for you, you must understand the problem TRILL addresses. The same goes for the rest of the protocols mentioned. When it boils down to it, the reason for looking at such protocols is that you want high switching capacity, low latency, and redundancy. The current de facto standard, Spanning Tree Protocol (STP), simply is unable to meet the needs of modern Layer 2 networks. TRILL addresses STP's limitation of allowing only one active path between switches or ports. STP prevents loops by managing active Layer 2 paths. TRILL applies the Intermediate System-to-Intermediate System (IS-IS) protocol, a Layer 3 routing protocol, adapted to Layer 2 devices.
For those who say TRILL is not the answer, SPB (also known as 802.1aq) and VXLAN are the alternatives. A presentation at NANOG 50 in 2010 addressed some of the SPB vs. TRILL debate and goes into great detail on the differences between the two.
The problem, which most folks overlook, is that you can only make a Layer 2 network so flat. The trend for a while, especially in data centers, has been to flatten out the network. Is TRILL better? Is SPB better? The question isn't which is the better solution. What needs to be addressed is the design philosophy behind why you need such things in the first place. Having large Layer 2 networks is generally a bad idea. Scaling issues can almost always be solved at Layer 3.
So, and this is where the philosophy starts, is TRILL, SPB, or even VXLAN for you? Yes, but with a very big asterisk. TRILL is a stop-gap measure, something to use in specific instances. TRILL reduces complexity and makes Layer 2 more robust when compared to MLAG. Where would you use such things? One common decision of whether to use TRILL comes in a virtualized environment such as vSphere.
Many vendors, such as Juniper, have developed their own solutions. Juniper's Virtual Chassis does away with spanning tree issues, which is what TRILL addresses. Cisco has FabricPath, Cisco's proprietary TRILL-based solution. Keep in mind, this is still TRILL. If you want to learn more about FabricPath, this article by Joel Knight gets to the heart of it.
Many networks see VXLAN as their upgrade path. VXLAN allows Layer 2 to be stretched across Layer 3 boundaries. If you are a “Microsoft person” you probably hear an awful lot about Network Virtualization using Generic Routing Encapsulation (NVGRE), which can encapsulate a Layer 2 frame into IP.
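To make that encapsulation concrete, here is a small Python sketch of the 8-byte VXLAN header defined in RFC 7348 and the extra on-the-wire overhead it implies. The function name and the example VNI are mine, purely for illustration, not from any vendor tool:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348):
    flags (0x08 = VNI valid), 3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    return struct.pack("!B3xI", 0x08, vni << 8)

# The original Layer 2 frame rides inside:
# outer Ethernet (14) + outer IP (20) + UDP (8) + VXLAN (8) = 50 extra bytes.
OVERHEAD = 14 + 20 + 8 + 8

hdr = vxlan_header(5000)
print(len(hdr))     # 8
print(hdr.hex())    # 0800000000138800  (VNI 5000 = 0x001388)
print(OVERHEAD)     # 50
```

Those 50 bytes are why stretching Layer 2 over VXLAN usually means raising the MTU on the Layer 3 underlay.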
The last thing to consider in this entire debate is how Software Defined Networking (SDN) plays into it. Many folks think controllers will make ECMP and MLAG easy to create and maintain. If a centralized controller has a complete view of the network, there is no longer a need to run protocols such as TRILL. The individual switch no longer makes the decision; the controller does.
Should you use TRILL, VXLAN, or any of the others mentioned? If you have a large Layer 2 virtualized environment, they might be something to consider. If you are an ISP, there is a very small case for running TRILL anywhere other than your data center. Things such as Carrier Ethernet and MPLS are the way to go.
Lately, we have had a few clients run into signals getting worse after they elevated client radios to ePMP. This is not a result of the software being bad, but of it enforcing the maximum EIRP on the units. This boils down to older devices being compliant with original FCC grants, which allowed higher EIRP. Cambium's ePMP Elevate recognizes the latest grant for the devices. This grant allows for a maximum of 41 dBm on 5/10/20 MHz channels and 38 dBm on 40 MHz.
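To see how the cap bites, here is a quick Python sketch of the EIRP math. The radio power and antenna gain numbers below are hypothetical, just to show why a previously legal-looking setup can get backed off:

```python
def eirp_dbm(tx_power_dbm, antenna_gain_dbi, cable_loss_db=0.0):
    """EIRP = conducted transmit power + antenna gain - cable/connector loss."""
    return tx_power_dbm + antenna_gain_dbi - cable_loss_db

def max_tx_power(channel_mhz, antenna_gain_dbi, cable_loss_db=0.0):
    """Highest conducted power that keeps EIRP within the grant:
    41 dBm cap on 5/10/20 MHz channels, 38 dBm on 40 MHz."""
    cap = 38 if channel_mhz == 40 else 41
    return cap - antenna_gain_dbi + cable_loss_db

# A 25 dBm radio into a 19 dBi antenna on a 40 MHz channel:
print(eirp_dbm(25, 19))      # 44.0 dBm -> 6 dB over the 38 dBm cap
print(max_tx_power(40, 19))  # 19.0 dBm -> firmware dials the radio back
```

That back-off in transmit power is exactly the signal drop people see after elevating.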
So if you have elevated some older UBNT devices, your signals may have dropped. This is due to compliance with the latest rules for the device. As our industry matures, becoming compliant will become more and more important. Newer firmware from UBNT enforces the same limits on UBNT units.
Cambium has a forum post on this. http://community.cambiumnetworks.com/t5/ePMP-Elevate/5-8-GHz-Elevated-Devices-Maximum-EIRP-in-the-United-States/m-p/73141#M475
We have some tricks of the trade we can do. Contact MTIN for how we can help.
I had a client learn a lesson this evening that they should not have had to. The client has had several key servers hosted at a small data center for several years now. These were managed servers the data center took care of. Things like new hard drives were the responsibility of the data center, so the client rarely paid attention to these machines. As many of you know, a server can spin for years and just be forgotten about.
Tonight these servers came under a very heavy Distributed Denial of Service (DDoS) attack. Fifteen-plus gigabits came to bear on the client's servers for an extended time. The client was unable to reach the data center NOC, nor did any of his contacts work. The servers were knocked offline. Four hours later the client finally received an e-mail from the data center saying they had unplugged the client's router because it was taking down their (the DC's) own network. After asking for a call from a manager, the client found out the DC had restructured and dropped many of their co-location and other hosting services. Their multiple 10 gig pipes had been reduced to one, and many clients had left. The manager said they have re-focused their business on things such as OLED screens and other things totally unrelated to running a data center. The hosting they do have left “pays the bills” so they can have a place to do research.
The client has redundancy, so they are not dead in the water. However, this redundancy was only supposed to be short term due to costs. The lesson learned is to keep in contact with your vital vendors. Call up your salesperson once or twice a year and see how things are going. Keep in contact with key folks at the company. If they are on LinkedIn, follow the company. If their focus appears to change or they go silent, do some legwork to find out what's going on.
I saw two very different examples of how technology affects families today. The first was a very positive experience with my wife's parents. Their satellite dish had come out of alignment. Amber and I were able to go over in the morning, quickly adjust the dish outside, and get their TV service restored. As a result, we were able to have a nice lunch with them and sit and visit for a little bit. For the techies reading this, it was just a simple alignment problem; a couple of loose bolts and some turning of the alignment bolts brought it back in line. It's the same thing we do all the time with wireless microwave backhauls.
The other example was a family down the street. As I was driving home I passed their house and noticed probably ten of them outside having a cookout. However, as I got closer to the house I noticed every single one of them had their heads down looking at their phones. No one was talking to each other. Maybe they were IMing each other, who knows.
Just some recent observations.
As the number of WISP LTE deployments increases, there are many things WISPs will need to be mindful of. One such item is properly supporting antenna cables. LTE systems are more sensitive to cable issues. In a previous blog post, I talked about PIM and low-PIM cables. One of the things that can cause PIM is improperly mated cables. If cables are not supported, they can become loose over time. Vibration from equipment or even the wind can loosen connections.
How do we support cables?
We can take a cue from the cellular industry. The following are some examples of proper cable support. Thanks to Joshua Powell for these pics.
Where can you get these?
Good places to start are sites like Sitepro1 or Tessco, which both have a selection.
So the next time you are planning your LTE deployment think about cable support.
I had a simple network consisting of a Mikrotik hooked to an internet connection with 3 APs behind it. Nothing fancy. The network was experiencing dropouts in service. The internet would just stop. One of the most noticeable symptoms was iPhones dropping the wireless link and reverting to LTE, or the internet just stopping for them. This was happening on a very regular basis.
Wireless testing was done and new APs were added, but no one thought to check the ports on the headend router. Upon investigation, the logs showed interface errors on ether4.
Normally this would be a slam dunk; however, there was nothing plugged into ether4 at all to generate these errors. No cable, no nothing. If you disabled the port, the errors would go away. Re-enable the port and they would come back. Upgrading and downgrading the OS did not fix the issue. A new headend router was installed and everything was back to working normally.
One of the topics that came up during the Baicells troubleshooting tips session was the notion of PIM testing and cables which are PIM rated.
PIM sweeps are a common thing in the cellular field. One of the first questions folks often ask is: what is a PIM sweep? If you think of PIM testing as a passive test and line sweeping as an active test, that is a good start. PIM testing looks for problems with things like connectors, cables, and other “layer 1” items. A PIM test is not a line sweep. Line sweeping measures the signal losses and reflections of the transmission system, typically expressed as VSWR. A line sweep is an active test; it cannot detect the same things a PIM test can. Many HAM radio folks are familiar with line sweeps, where the reflected power in an antenna system is measured.
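For the line-sweep side, the numbers relate with some standard RF math. A minimal Python sketch (function names are mine) converting the return loss a sweep reports into VSWR and reflected power:

```python
def vswr_from_return_loss(rl_db):
    """A line sweep reports return loss in dB; convert it to VSWR.
    gamma is the magnitude of the reflection coefficient."""
    gamma = 10 ** (-rl_db / 20)
    return (1 + gamma) / (1 - gamma)

def reflected_power_pct(rl_db):
    """Percentage of forward power reflected back down the line."""
    return 100 * 10 ** (-rl_db / 10)

# A 20 dB return loss is a healthy antenna line:
print(round(vswr_from_return_loss(20), 2))  # 1.22
print(round(reflected_power_pct(20), 1))    # 1.0 (percent reflected)
```

A line can show numbers like these on a sweep and still fail a PIM test, which is the whole point of the distinction above.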
What does a PIM test do?
When you do a PIM test, typically two high-power signals are injected into the antenna line, and the tester measures the intermodulation products that any nonlinear junction in the line generates. You can actually pass a sweep test but fail a PIM test.
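The reason those intermodulation products matter is simple arithmetic: two tones mixing in a nonlinear junction produce third-order products at predictable frequencies. A small Python sketch (the test-tone frequencies are hypothetical, just to show the math):

```python
def third_order_pim(f1_mhz, f2_mhz):
    """Third-order intermod products generated when two test tones
    mix in a nonlinear junction: 2*f1 - f2 and 2*f2 - f1."""
    return (2 * f1_mhz - f2_mhz, 2 * f2_mhz - f1_mhz)

# Two hypothetical tones in the 5 GHz band:
lo, hi = third_order_pim(5745, 5765)
print(lo, hi)  # 5725 5785 -- products can land right in a receive channel
```

That is why PIM shows up as noise in the receiver even when the transmit path sweeps clean.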
I won’t go into PIM tests very much because you need high-dollar units such as those from Anritsu and Kaelus. These cost tens of thousands of dollars new. Sometimes you can find them used. However, the next thing you will run into is understanding the output of such a device. Cell crews go to week-long certification classes to become PIM-certified techs with Anritsu and others.
What causes a PIM test to fail?
According to Kaelus the most common problems are:
• Contaminated surfaces or contacts due to dirt, dust, moisture, or oxidation.
• Loose mechanical junctions due to inadequate torque, poor alignment, or poorly prepared contact surfaces.
• Loose mechanical junctions caused by transportation shock or vibration.
• Metal flakes or shavings inside RF connections.
• Poorly prepared RF connections.
• Trapped dielectric materials (adhesives, foam, etc.).
• Cracks or distortions at the end of the outer conductor of coaxial cables caused by over-tightening the back nut during installation.
• Solid inner conductors distorted in the preparation process, causing them to be out of round or tapered over the mating length.
• Hollow inner conductors excessively enlarged or made oval during the preparation process.
Why does cable matter?
Cables do not typically cause PIM, but poorly terminated or damaged cables can and do cause problems.
Cables with seams can cause issues. The seam can corrode. Plated copper, found in cheaper cables, can break away from the aluminum core. This allows small amounts of flaking between the connector and the core of the cable, which will cause PIM issues and is very hard to diagnose. Imagine little flakes inside a connector. You don't see them until you break open the connector, and even then they may be quite small.
Cables can change their physical configuration as temperature varies. For instance, sunshine can warm cables, changing their electrical length. A cable that happens to be the right length to cancel out PIM when cool may show strong PIM after changing its length on a warm day, or, it can work the other way around, good when hot and bad when cold. In addition, the physical change in length can make a formerly good connection into a poor one, also generating PIM. Other environmental factors such as water in the connector or cable can be an issue, as with any RF setup.
I think I have PIM issues. What are some indications?
PIM often shows up as poor statistics from the affected antenna. One of the first and most direct indications of PIM can be seen in cells with two receive paths. If the noise floor is not equal between the two paths, the cause is likely PIM generated inside the noisy receive path.
How Do I prevent PIM issues?
Cable quality and connector quality are among the biggest factors in the PIM performance of an LTE system. Many WISPs are used to making their own LMR cables and putting on their own connectors. There is a difference between a low-PIM LMR-400 cable and normal LMR-400. The same goes for connectors. One of the recommendations today was to use 1/2” Superflex Heliax.
The easy recommendation is to buy pre-made cables that have already been PIM certified. In a typical WISP setup, you do not have lots and lots of components. Buy components from your distributors that are already certified “low-PIM rated”.
While trying to get my PlayStation to download the latest “No Man’s Sky” update quicker, I figured I would share a little torch action. This shows my wife's iPad talking to Netflix while she is watching a streaming TV show. Keep in mind this is just an iPad, not some 4K TV.
Some things to note as you watch this (no sound).
1. Uncapped, the connection bursts to 50-60+ megs.
2. The slower you queue the connection, the more time it spends downloading data. At slower queue speeds the bursts last longer.
3. If you are handing out IPv6 to customers, you should be queuing them as well.
Just something quick and dirty to keep in mind.
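The burst behavior above can be illustrated with a toy token-bucket model in Python. This is a simplified sketch, not RouterOS's exact queue algorithm, and all the numbers are hypothetical:

```python
def token_bucket(rate_mbps, bucket_mb, demand_mbps, seconds):
    """Toy token-bucket queue: a full bucket lets traffic burst above the
    sustained rate until the tokens run out, then throughput settles back
    to the configured rate. Returns megabits sent in each second."""
    tokens = bucket_mb * 8  # megabits of burst credit
    sent = []
    for _ in range(seconds):
        allowed = rate_mbps + tokens            # this second's ceiling
        tx = min(demand_mbps, allowed)          # what actually gets through
        tokens = max(0.0, tokens - max(0.0, tx - rate_mbps))
        sent.append(tx)
    return sent

# A 10 Mbps queue with a burst bucket, client demanding 60 Mbps:
print(token_bucket(10, 25, 60, 6))
# bursts at 60 Mbps for a few seconds, then settles to the 10 Mbps rate
```

This is the effect in note 2: the slower the queue relative to the burst, the longer the connection sits in the high-throughput phase pulling data.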