The importance of cable support in LTE deployments

As the number of WISP LTE deployments increases, there are many things WISPs need to be mindful of.  One such item is properly supporting antenna cables. LTE systems are more sensitive to cable issues.  In a previous blog post, I talked about PIM and low-PIM cables.  One of the things that can cause PIM is improperly mated cables.  If cables are not supported, they can become loose over time.  Vibration from equipment, or even the wind, can loosen connections.

How do we support cables?
We can take a cue from the cellular industry. The following are some examples of proper cable support.  Thanks to Joshua Powell for these pics.

Where can you get these?
A good place to start is a site like SitePro1; Tessco also has a selection.

So the next time you are planning your LTE deployment think about cable support.

The importance of checking layer one and two

I had a simple network consisting of a Mikrotik hooked to an internet connection, with 3 APs behind it.  Nothing fancy.  The network was experiencing dropouts in service.  The internet would just stop.  One of the most noticeable symptoms was that iPhones would drop the wireless link and revert back to LTE, or the internet would just stop working for them.  This was happening on a very regular basis.

Wireless testing was done, new APs were added, but no one thought to check the ports on the headend router.  Upon investigation of the logs this was found:

Normally this would be a slam dunk; however, there was nothing plugged into ether4 at all to generate these errors.  No cable, nothing.  If you disabled the port, the errors would go away.  Re-enable the port and they would come back.  Upgrading and downgrading the OS did not fix the issue.  A new headend router was installed and everything was back to working normally.

WISP LTE, PIM testing, and quality

One of the topics that came up during the Baicells troubleshooting tips was the notion of PIM testing and cables which are PIM-rated.

PIM sweeps are a common thing in the cellular field.  One of the first questions folks often ask is: what is a PIM sweep?  If you think of PIM testing as a passive test and line sweeping as an active test, that is a good start.  PIM testing looks for problems with things like connectors, cables, and other “layer 1” items.  A PIM test is not a line sweep.  Line sweeping measures the signal losses and reflections of the transmission system, typically expressed as VSWR.  A line sweep is an active test; it cannot detect the same things a PIM test can.  Many HAM radio folks are familiar with line sweeps, where the reflected power in an antenna system is measured.

What does a PIM test do?

When you do a PIM test, typically two high-power signals are injected into the antenna line, and the tester looks for intermodulation products generated by nonlinearities along the line.  You can actually pass a sweep test but fail a PIM test.
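As a quick aside, the frequencies where those intermodulation products land are easy to compute from the two injected test tones. Here is a minimal Python sketch; the two tone frequencies are hypothetical values I picked for illustration, not from any particular test set:

```python
# Sketch: where intermodulation products of a two-tone PIM test land.
# The tone frequencies below are hypothetical examples.

def im_products(f1, f2, order):
    """Return the |m*f1 - n*f2| and m*f1 + n*f2 mixing products (m + n = order)."""
    products = set()
    for m in range(order + 1):
        n = order - m
        if m == 0 or n == 0:
            continue  # pure harmonics, not two-tone mixing products
        products.add(abs(m * f1 - n * f2))
        products.add(m * f1 + n * f2)
    return sorted(products)

# Two hypothetical test tones in MHz:
f1, f2 = 1930.0, 1990.0
print(im_products(f1, f2, 3))  # third-order (IM3) products
# → [1870.0, 2050.0, 5850.0, 5910.0]
```

The third-order products (2f1 − f2 and 2f2 − f1) are the troublesome ones, since they can land close to the operating band and right on top of a receive channel.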

I won’t go into PIM tests very much because you need high-dollar units such as those from Anritsu and Kaelus.  These cost tens of thousands of dollars new.  Sometimes you can find them used.  However, the next thing you will run into is understanding the output of such a device.  Cell crews go to week-long certification classes from Anritsu and others to become PIM-certified techs.

What causes a PIM test to fail?

According to Kaelus the most common problems are:

• Contaminated surfaces or contacts due to dirt, dust, moisture, or oxidation.
• Loose mechanical junctions due to inadequate torque, poor alignment, or poorly prepared contact surfaces.
• Loose mechanical junctions caused by transportation shock or vibration.
• Metal flakes or shavings inside RF connections.
• Poorly prepared RF connections.
• Trapped dielectric materials (adhesives, foam, etc.).
• Cracks or distortions at the end of the outer conductor of coaxial cables caused by over-tightening the back nut during installation.
• Solid inner conductors distorted in the preparation process, causing these to be out of round or tapered over the mating length.
• Hollow inner conductors excessively enlarged or made oval during the preparation process.

Why does cable matter?

Cables do not typically cause PIM, but poorly terminated or damaged cables can and do cause problems.

Cables with seams can cause issues.  The seam can corrode.  Plated copper, found in cheaper cables, can break away from the aluminum core.  This allows small amounts of flaking between the connector and the core of the cable, which will cause PIM issues and is very hard to diagnose.  Imagine little flakes inside a connector: you don’t see them until you break open the connector, and even then they may be quite small.

Cables can change their physical configuration as temperature varies. For instance, sunshine can warm cables, changing their electrical length. A cable that happens to be the right length to cancel out PIM when cool may show strong PIM after changing its length on a warm day, or, it can work the other way around, good when hot and bad when cold. In addition, the physical change in length can make a formerly good connection into a poor one, also generating PIM. Other environmental factors such as water in the connector or cable can be an issue, as with any RF setup.

I think I have PIM issues. What are some indications?

PIM often shows up as poor statistics from the affected antenna. One of the first and most direct indications of PIM can be seen in cells with two receive paths. If the noise floor is not equal between the two paths, the cause is likely PIM generated inside the noisy receive path.
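The two-receive-path comparison above can be boiled down to a trivial check. This is just a sketch of the heuristic; the 3 dB threshold and the sample noise-floor values are my own illustration, not a vendor specification:

```python
# Sketch: flagging a possible PIM source from per-path noise-floor stats.
# Threshold and sample values are illustrative assumptions.

def pim_suspect(noise_floor_a_dbm, noise_floor_b_dbm, threshold_db=3.0):
    """Return the noisier receive path if the two diverge beyond threshold,
    else None (balanced paths give no PIM indication from this check)."""
    delta = noise_floor_a_dbm - noise_floor_b_dbm
    if abs(delta) <= threshold_db:
        return None
    return "path A" if delta > 0 else "path B"

print(pim_suspect(-98.0, -105.0))   # → path A (the noisy, suspect path)
print(pim_suspect(-100.0, -101.0))  # → None (paths roughly balanced)
```

A balanced-but-high noise floor on both paths can still be PIM (or external interference), so this check only localizes the easy case.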

How Do I prevent PIM issues?

Cable quality and connector quality are among the biggest factors in the PIM performance of an LTE system.  Many WISPs are used to making their own LMR cables and putting on their own connectors.  There is a difference between a low-PIM LMR-400 cable and normal LMR-400; the same goes for connectors.  One of the recommendations today was to use 1/2″ Superflex Heliax.

The easy recommendation is to buy pre-made cables that have already been PIM certified.  In a typical WISP setup, you do not have lots and lots of components.  Buy already-certified components from your distributors that are “low PIM rated”.

Netflix, IPv6, and queuing

While trying to get my PlayStation to download the latest “No Man’s Sky” update quicker, I figured I would share a little Torch action.  This shows my wife’s iPad talking to Netflix while she is watching a streaming TV show.  Keep in mind this is just an iPad, not some 4K TV.

Some things to note as you watch this (no sound).

1. Uncapped, the connection bursts to 50-60+ megs.
2. The slower you queue the connection, the more time it spends downloading data.  At slower queues the bursts last longer.
3. If you are handing out IPv6 to customers, you should be queuing them as well.

Just something quick and dirty to keep in mind.

Ethernet MTU and overhead

One of the most common questions is how much overhead do I need to account for on my transport network? I have put together a quick list to help when you are calculating your overhead.

-GRE (IP protocol 47, RFC 2784): 24 bytes (20-byte IPv4 header, 4-byte GRE header)
-6in4 encapsulation (IP protocol 41, RFC 4213): 20 bytes
-4in6 encapsulation (e.g. DS-Lite, RFC 6333): 40 bytes
-Additional IPv4 header: 20 bytes
-IPsec encryption: 73 bytes for ESP-AES-256 and ESP-SHA-HMAC (overhead depends on transport or tunnel mode and the encryption/authentication algorithm and HMAC)
-MPLS: 4 bytes for each label in the stack
-802.1Q tag: 4 bytes
-Q-in-Q: 8 bytes
-VXLAN: 50 bytes
-OTV: 42 bytes

Some rules of thumb when setting MTUs.  The most important: you won’t get fragmentation as long as your L2 MTU is large enough to carry your L3 MTU plus all the encapsulation overhead in use.  This is not just the configured setting, but the actual overhead in use; setting it to a number doesn’t necessarily make it right.  The list above will help you calculate the minimum MTU you need.  I try to get gear that supports a 1548-byte MTU and set everything to that.  It makes things simple.  I still want to know how much MTU I am actually utilizing, because it helps me validate my designs.
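The overhead math above is just addition, but it is easy to get wrong when stacking encapsulations. A minimal Python sketch, using the byte counts from the list above (the example stacks and the 1500-byte payload are my own illustration):

```python
# Sketch: minimum L2 MTU needed to carry a payload through stacked encapsulations.
# Overhead values come from the list in the post above.

OVERHEAD_BYTES = {
    "gre": 24,          # 20-byte IPv4 header + 4-byte GRE header (RFC 2784)
    "6in4": 20,         # RFC 4213
    "4in6": 40,         # e.g. DS-Lite (RFC 6333)
    "ipv4": 20,         # an additional IPv4 header
    "mpls_label": 4,    # per label in the stack
    "dot1q": 4,         # 802.1Q tag
    "qinq": 8,
    "vxlan": 50,
    "otv": 42,
}

def min_mtu(payload_bytes, *encaps):
    """Payload size plus the overhead of each encapsulation, innermost first."""
    return payload_bytes + sum(OVERHEAD_BYTES[e] for e in encaps)

# A 1500-byte customer packet inside two MPLS labels and an 802.1Q tag:
print(min_mtu(1500, "mpls_label", "mpls_label", "dot1q"))  # → 1512
# The same packet inside VXLAN:
print(min_mtu(1500, "vxlan"))  # → 1550
```

Note the VXLAN case already exceeds a 1548-byte transport MTU with a full 1500-byte payload, which is exactly why you want to do this arithmetic instead of trusting a round number.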

BGP local Pref and you

One of the BGP topics that comes up from time to time is: what does “bgp local-pref” do for me? The short answer is it allows you to prefer which direction traffic will flow to a given destination. How can this help you? Before we start, remember that the higher number wins in local-pref.
Let’s assume you are an ISP. You have the following connections:
-You supply a BGP connection to a downstream client.
-You have a private peer setup with the local college
-You are hooked into a local internet exchange
-You have transport to another internet exchange in the next state over
-and you have some transit connections where you buy internet.

So how do we use BGP local preference to help us out? We might apply the following rules to routes received from our various peers:
-Downstream client: local-pref 150
-College private peer: 140
-Preferred internet exchange peering: 130
-Next-state IX: 120
-Transit ISPs: 100

Now these don’t make much sense by themselves, but they do when you take into account how BGP would make a decision if it has to choose between multiple paths. If it only has one path to a certain route then local-pref is not relevant.

Let’s say you have a customer on your ISP who is sending traffic to a server at a local college. Maybe they are a professor remoting into a server at the college to run experiments. There are probably multiple ways for that traffic to go: if the college is on the local internet exchange you are a member of, that is one route; another is your transit ISPs; and obviously there is your private peer with the college. In our example above, the college, with a local-pref of 140, wins out over the local exchange, the next-state IX, and the transit ISPs. We want it to go directly over the private peer with the college. Mission accomplished.

local-pref is just one way to engineer your traffic to go out certain links. Keep in mind a few things:
1. The higher number wins.
2. local-pref only matters if there are multiple paths to the same destination.
3. local-pref affects outbound path selection.
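The professor example boils down to “of the paths that actually advertise the route, the highest local-pref wins.” A toy Python sketch of just that step (peer names and values mirror the example above; this is an illustration of the tie-break, not the full BGP decision process):

```python
# Sketch: highest local-pref wins among paths that advertise the destination.
# Values mirror the example policy in the post; this ignores all the other
# BGP best-path steps (AS-path length, MED, etc.) that follow local-pref.

paths = [
    {"peer": "downstream-client", "local_pref": 150},
    {"peer": "college",           "local_pref": 140},
    {"peer": "preferred-ix",      "local_pref": 130},
    {"peer": "next-state-ix",     "local_pref": 120},
    {"peer": "transit",           "local_pref": 100},
]

def best_path(candidates):
    """Pick the path with the highest local-pref; with one path it's moot."""
    return max(candidates, key=lambda p: p["local_pref"])

# Only the college, the preferred IX, and transit advertise the college's prefix:
advertised = [p for p in paths if p["peer"] in ("college", "preferred-ix", "transit")]
print(best_path(advertised)["peer"])  # → college
```

Note the downstream client’s 150 never comes into play here, because that peer does not advertise the college’s prefix, which is exactly point 2 above.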

12 days of netmas

On the first day of netmas
my true love sent to me:
A spanning tree instance

On the second day of netmas
my true love sent to me:
2 ethernet ends
and a spanning tree instance

On the third day of netmas
my true love sent to me:
3 sfps
2 ethernet ends
and a spanning tree instance

On the fourth day of netmas
my true love sent to me:
4 subnet masks
3 sfps
2 ethernet ends
and a spanning tree instance

On the fifth day of netmas
my true love sent to me:
5 poe injectors
4 subnet masks
3 sfps
2 ethernet ends
and a spanning tree instance

On the sixth day of netmas
my true love sent to me:
6 switches switching
5 poe injectors
4 subnet masks
3 sfps
2 ethernet ends
and a spanning tree instance

On the seventh day of netmas
my true love sent to me:
7 OSPF areas
6 switches switching
5 poe injectors
4 subnet masks
3 sfps
2 ethernet ends
and a spanning tree instance

On the eighth day of netmas
my true love sent to me:
8 packets a flowing
7 OSPF areas
6 switches switching
5 poe injectors
4 subnet masks
3 sfps
2 ethernet ends
and a spanning tree instance

On the ninth day of netmas
my true love sent to me:
9 fans a cooling
8 packets a flowing
7 OSPF areas
6 switches switching
5 poe injectors
4 subnet masks
3 sfps
2 ethernet ends
and a spanning tree instance

On the tenth day of netmas
my true love sent to me:
10 gigs a flowing
9 fans a cooling
8 packets a flowing
7 OSPF areas
6 switches switching
5 poe injectors
4 subnet masks
3 sfps
2 ethernet ends
and a spanning tree instance

On the eleventh day of netmas
my true love sent to me:
11 BGP Peers
10 gigs a flowing
9 fans a cooling
8 packets a flowing
7 OSPF areas
6 switches switching
5 poe injectors
4 subnet masks
3 sfps
2 ethernet ends
and a spanning tree instance

On the twelfth day of netmas
my true love sent to me:
12 routers on a stick
11 BGP Peers
10 gigs a flowing
9 fans a cooling
8 packets a flowing
7 OSPF areas
6 switches switching
5 poe injectors
4 subnet masks
3 sfps
2 ethernet ends
and a spanning tree instance

ARIN changes fees for transfer requests of number resources

Beginning 1 January 2017, ARIN will collect a $300 USD, non-refundable processing fee for each transfer request of Internet number resources, including:

   * 8.2 Merger, Acquisition, and Reorganization transfers; billed to the source (or legal successor) organization.

   * 8.3 Transfers to Specified Recipients within the ARIN region, billed to the source-side organization. The Transfer processing fee is waived when the subject resources are under an existing Registration Services Plan (RSP), and no specific transfer processing fee will be charged to the recipient-side organization.

   * 8.4 Inter-RIR Transfers to Specified Recipients, a fee is billed to the source-side organization if within the ARIN region. This transfer processing fee is waived when the subject resources are under an existing Registration Services Plan (RSP).  No specific transfer processing fee will be charged to recipient-side organizations.

This fee will be invoiced to the source organization’s billing Point of Contact (POC) and is to be paid before request evaluation begins. It replaces the current $500 resource transfer fee on the existing fee schedule: https://www.arin.net/fees/fee_schedule.html

Transferred resources will be subject to annual fees as stipulated by the fee schedule, including registry maintenance fees or corresponding Registration Services Plan. Additional fees may apply based on the status of the source or recipient organization at the time of transfer.

This change arose out of a community consultation, which is available for review at:

https://arin.net/participate/acsp/community_consult/09-01-2016_transferfee.html

If you have additional questions, please contact ARIN Financial Services via Ask ARIN, while logged into your ARIN Online account.