Fluke Networks has fired another volley on the “zip ties vs. Velcro” front for cable management. While this article does not address Velcro vs. zip ties directly, it does raise some points about using zip ties.
The new ANSI/TIA-568.2-D cabling standard now allows the use of 28 AWG patch cords. What does this mean, and how does it affect you? Read this article from Fluke Networks.
Number one takeaway:
-Recommended length of no more than 15 meters. This makes 28 AWG cords great for dense racks and patch panels.
Many ISPs run into this problem as part of their growing pains. This scenario usually starts happening with their third or fourth peer.
The scenario: an ISP grows beyond the single connection it has. This can be 10 meg, 100 meg, a gig, or whatever. They start out looking for redundancy. The ISP brings in a second provider, usually at around the same bandwidth level. This way the network has two roughly equal paths out.
A unique problem usually develops as the network grows to the point of maxing out the capacity of both of these connections. The ISP has to make a decision: do they increase the capacity to just one provider? Most don’t have the budget to increase capacity with both. If you increase one, you are favoring one provider over the other until the budget allows you to increase capacity on both. You are essentially in a state where you have to favor one provider in order to keep up with demand. If you fail over to the smaller pipe, things could be just as bad as being down.
This is where many ISPs learn the hard way that BGP is not load balancing. But what about padding, communities, local-pref, and all that jazz? We will get to that. In the meantime, our ISP may have the opportunity to get to an Internet Exchange (IX) and offload things like streaming traffic. Traffic returns to a little more balance because you essentially have a third provider with the IX connection. But the growing pains don’t stop there.
As ISPs, especially WISPs, gain more resources to put toward cutting down latency, they start seeking out better-peered networks. The next growing pain that becomes apparent is that networks with lots of high-end peers tend to charge more money. In order for the ISP to buy bandwidth from these types of providers, they usually have to do it in smaller quantities. This introduces the problem of a mismatched pipe size again, with a twist. The twist is that the more, and better, peers a network has, the more traffic is going to want to travel to that peer. So the more expensive peer, which you are probably buying less of, now wants to handle more of your traffic.
So, the network geeks will bring up things like padding, communities, local-pref, and all the tricks BGP has. But, at the end of the day, BGP is not load balancing. You can *influence* traffic, but BGP does not allow you to say “I want 100 megs of traffic here, and 500 megs here.” Keep in mind BGP deals with traffic to and from IP blocks, not the traffic itself.
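A toy sketch of why influence is not allocation. BGP picks one best path per prefix (here simplified to shortest AS path), so the load on each provider is just the sum of whatever traffic happens to flow toward the prefixes that resolve to it. All prefixes, path lengths, and traffic figures below are made up for illustration:

```python
# Toy model: BGP chooses one exit per prefix (simplified here to shortest
# AS path), so per-provider load is whatever the per-prefix traffic adds
# up to -- not a ratio you can dial in. All numbers are hypothetical.

prefixes = {
    "198.51.100.0/24": {"path_a": 3, "path_b": 4, "mbps": 400},
    "203.0.113.0/24":  {"path_a": 5, "path_b": 2, "mbps": 250},
    "192.0.2.0/24":    {"path_a": 2, "path_b": 2, "mbps": 300},
}

def load_per_provider(prefixes, prepend_a=0):
    """Return Mbps exiting via providers A and B; prepend_a pads A's paths."""
    loads = {"A": 0, "B": 0}
    for info in prefixes.values():
        a_len = info["path_a"] + prepend_a
        # Tie-break toward A, loosely mimicking a deterministic tiebreaker.
        exit_via = "A" if a_len <= info["path_b"] else "B"
        loads[exit_via] += info["mbps"]
    return loads

print(load_per_provider(prefixes))               # {'A': 700, 'B': 250}
print(load_per_provider(prefixes, prepend_a=1))  # {'A': 400, 'B': 550}
```

Note how one prepend moves entire prefixes at once: the split jumps from 700/250 to 400/550, and there is no knob in between. That chunkiness is exactly the "influence, not load balancing" point.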
So, how does the ISP solve this? Knowing about your upstream peers is the first thing. BGP looking glasses, peer reports such as those from Hurricane Electric, and general industry news help keep you on top of things. Things such as new peering points, acquisitions, and new data centers can influence an ISP’s traffic. If your equipment supports netflow, sflow, or similar tools, you can begin to build a picture of your traffic and which ASNs it is going to. This is your first major step: get tools to know which ASNs the traffic is going to. You can then take this data and look at how your own peers are connected with those ASNs. You will start to see things like provider A being poorly peered with ASN 2906.
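The "build a picture" step can be as simple as summing flow-export bytes per destination ASN. A minimal sketch, assuming you already have flow records from a collector (nfdump, pmacct, etc.); the records and ASNs below are hypothetical:

```python
# Sketch: aggregate exported flow records by destination ASN to see where
# your traffic wants to go. The (dst_asn, bytes) tuples are hypothetical
# stand-ins for what a NetFlow/sFlow collector would hand you.
from collections import defaultdict

flows = [
    (2906, 9_500_000_000),   # hypothetical: a large streaming network
    (15169, 4_200_000_000),  # hypothetical: a large content network
    (2906, 1_300_000_000),
    (32934, 2_100_000_000),
]

bytes_by_asn = defaultdict(int)
for asn, nbytes in flows:
    bytes_by_asn[asn] += nbytes

# Top talkers first: these are the ASNs worth checking your
# upstreams' peering against.
for asn, nbytes in sorted(bytes_by_asn.items(), key=lambda kv: -kv[1]):
    print(f"AS{asn}: {nbytes / 1e9:.1f} GB")
```

Once the top ASNs fall out of a report like this, comparing them against each upstream’s peering (looking glasses, PeeringDB) tells you where padding or local-pref is worth applying.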
Once you know who your peers are and have a good feel on their peering then you can influence your traffic. If you know you don’t want to send traffic destined for ASN 2906 in or out provider A you can then start to implement AS padding and all the tricks we mentioned before. But, you need the greater picture before you can do that.
One last note. Peering is dynamic. You have to keep on top of the ecosystem as a whole.
Below, we have some Visio diagrams we have done for customers.
This first design is a customer mesh into a couple of different data centers. We are referring to this as a switch-centric design. This has been talked about in the forums, and “switch-centric” seems as good a name as any.
This next design is a Netonix switch and a Baicells deployment.
We like to refer to Indianapolis, Indiana as an “NFL City” when explaining the connectivity and peering landscape. It is not a large network presence like Chicago or Ashburn, but it has enough networks to make it a place for great interconnects.
At the heart of Indianapolis is the Indy Telcom complex: www.indytelcom.com (currently down as of this writing). This is also referred to as the “Henry Street” complex because West Henry Street runs past several of the buildings. It is a large complex with many buildings.
One of the things many of our clients ask about is getting connectivity from building to building on the Indy Telcom campus. Lifeline Data Centers ( www.lifelinedatacenters.com ) operates a carrier hotel at 733 Henry. With at least 30 on-net carriers and access to many more, 733 is the place to go for cross-connect connectivity in Indianapolis.

We have been told by Indy Telcom that the conduits between the buildings on the campus are 100% full. This makes connectivity challenging at best when going between buildings. The campus has lots of space, but the buildings are on islands if you wish to establish dark fiber cross-connects between them. Many carriers have lit services, but due to the way many carriers provision things, getting a strand, or even a wave, is not possible. We do have some options from companies like Zayo or Lightedge for getting connectivity between buildings, but it is not like Chicago or other big data centers.

However, there is a solution for those looking to establish interconnections. Lifeline also operates a facility at 401 North Shadeland, referred to as the EastGate facility. This facility is built on 41 acres, is FedRAMP certified, and has a bunch of features. There is a dark fiber ring running between 733 and 401, which is ideal for folks looking for both co-location and connectivity. Servers and other infrastructure can be housed at EastGate and connectivity can be pulled from 733. This solves the 100%-full conduit issue at Indy Telcom. MidWest Internet Exchange ( www.midwest-ix.com ) is also on-net at both 401 and 733.
Another location where MidWest-IX is present is 365 Data Centers ( http://www.365datacenters.com ) at 701 West Henry. 365 has a national footprint, operating data centers in Tennessee, Michigan, New York, and other states, and thus draws somewhat different clients than some of the other facilities. MidWest has dark fiber over to 365 in order to bring them onto its Indy fabric.
Another large presence at Henry Street is Lightbound ( www.lightbound.com ). They have a couple of large facilities. According to PeeringDB, only three carriers are in their 731 facility. However, their web site claims 18+ carriers in their facilities, though it does not name them.
I am a big fan of PeeringDB for knowing who is at what facilities, where peering points are, and other geeky information. Many of the facilities in Indianapolis are not listed on PeeringDB. Some other data centers we know about:
On the north side of Indianapolis, you have Expedient ( www.expedient.com ) in Carmel. Expedient says they have “dozens of on net carriers among all markets”. There are some other data centers in the Indianapolis Metro area. Data Cave in Columbus is within decent driving distance.
A few days ago Homeland Security published an e-mail on threats to network devices and securing them. Rather than cut and paste I exported the e-mail to a PDF. Some good best practices in here.
So today UPS dropped off a brand new EdgeSwitch 16XG. I won’t bore you with all the cool stats. You can read the official product literature here. This is just a first look. Future posts will dive into configuration, testing, and other such things. For those wanting the cliff notes version of what this switch is about:
- (12) 1/10 Gbps SFP+ Ethernet Ports
- (4) 1/10 Gbps RJ45 Ethernet Ports
- (1) RJ45 Serial Console Port
- Non-Blocking Throughput: 160 Gbps
- Switching Capacity: 320 Gbps
- Forwarding Rate: 238.10 Mpps
- Rack Mountable with Rack-Mount Brackets (Included)
- DC Input Option (Redundant or Stand-Alone)
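The headline numbers hang together, and it is worth seeing how. A quick sanity check, assuming the usual conventions (switching capacity counts both directions, forwarding rate assumes minimum-size frames with 20 bytes of preamble/gap overhead on the wire):

```python
# Sanity check of the published specs: 16 ports x 10 Gbps, full duplex.
ports, port_gbps = 16, 10

switching_capacity_gbps = ports * port_gbps * 2  # both directions counted
non_blocking_gbps = ports * port_gbps            # one direction

# Minimum Ethernet frame on the wire: 64 B frame + 20 B preamble/IFG = 84 B.
wire_bits_per_frame = (64 + 20) * 8
forwarding_mpps = non_blocking_gbps * 1e9 / wire_bits_per_frame / 1e6

print(switching_capacity_gbps, non_blocking_gbps, round(forwarding_mpps, 2))
# 320 160 238.1
```

That 238.1 Mpps matches the datasheet figure, which tells you the forwarding rate is quoted at full line rate with worst-case small packets.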
UBNT is following a natural trend in the switch world. As more and more networks treat 1 gig as their minimum, switches are reflecting this. Gone are the days of 10/100 ports; we are now moving toward 1/10 gig ports, even on copper. 10/100/1000 switches still have their place, but usually not on switches with 10 gig ports.
Out of the box the switch isn’t anything sexy. I feel like it should have a shiny UBNT logo somewhere.
I like the fact that none of the ports are shared ports. You can use all 16 ports. It always annoys me when I buy a switch and can’t use all the ports because they are shared on the bus.
An interesting feature on this switch is a redundant DC input option. This can be anything from 16 to 25 volts and must be able to supply 56 watts. At 25 volts that works out to a minimum of about a 2.2 amp power supply, assuming a full load on the switch. For the WISP market this could be a very handy option. You could install the switch drawing from AC power, and in the event of an AC outage it will switch to a DC source. One of my questions for UBNT is whether you can run it entirely off DC.
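The arithmetic behind that 2.2 amp figure is just I = P / V, and it is worth noting that the required current rises as the supply voltage drops toward the bottom of the input range:

```python
# The DC input accepts 16-25 V and must supply 56 W at full load.
# Required current scales inversely with voltage: I = P / V.
power_w = 56.0

for volts in (16.0, 20.0, 25.0):
    amps = power_w / volts
    print(f"{volts:.0f} V -> {amps:.2f} A")
# 16 V -> 3.50 A
# 20 V -> 2.80 A
# 25 V -> 2.24 A
```

So 2.2 A is the best case at 25 V; a 16 V battery bank needs to be able to source 3.5 A, which matters when sizing the DC side of a tower install.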
Now, on to some nitpicky design things. None of these really affect the performance of the switch; they are just annoyances.
-The console port not being on the front. In today’s dense rack environments we are putting patch panels and transfer switches in the back of the rack. If we have to get to the back of front-mounted devices, then anything other than power becomes an annoyance. This is not an issue if you install every new switch with a console cable back to a console server, like we do, but even that doesn’t always happen.
-The SFP cages should stick out just a tad from the front. While inserting and re-inserting SFPs I actually pushed the cage back a little, which resulted in some of the SFPs not clicking in correctly. The little tabs holding the tops of the SFP cages aren’t sturdy enough to withstand repeated clicking in and out.
After seeing this I was prompted to open the switch and see what is under the hood.
I think this will be a hugely popular switch for anybody looking to do 10 gig. At an approximate price of $600 these are, by far, the most cost-effective 10 gig switches out there. Many manufacturers have tacked on one or two, sometimes four, SFP+ ports, but if you need to go beyond that you are talking four-digit pricing. This is something we have struggled with at MidWest-IX. It usually leads to us buying something on the used market that has the port density we need.
There you have it for a first look at this switch. More articles to follow that include:
-Questions I and you, the reader, have for UBNT
Direct from their web-site.
How to Report
Use the format below when reporting a service outage. Once verified, we will plot it on the tracker.
For example: #outage #loc (street, city – location name) #start (time), followed by #back (time) and #planned or #unplanned (depending on whether it was a planned or unexpected outage).
Send comments/feedback/feature requests to virendra[dot]rode[at]outages.org