Reflections on the networking industry, part 2: On CLI, APIs and SNMP

In the previous post I briefly described how many networks today are closed and vertically integrated. While standard protocols are being adopted by vendors, true interoperability is still a challenge. Sure, you can bring up a BGP peering between platforms from different vendors and exchange routing information (otherwise we couldn’t scale the Internet), but management and configuration are still, in most cases, vendor specific.

Every network engineer out there has learned to respect the CLI. We sometimes love it and sometimes hate it, but we all tend to master it. It remains the glorious way of interacting with a network device, even in 2015. Some common properties of CLIs:

  1. They are vendor specific, and sometimes even device specific;
  2. They are not standardized: there is no standard for structuring the input or for formatting the output;
  3. They have no strict notion of versioning and no guarantee of backward compatibility;
  4. They can change between software releases.

All of the above make CLIs an acceptable solution up to a certain scale. In large-scale networks, automation is a key part of operations and usually mandatory. But given the properties mentioned above, automating device configuration through CLI commands is not a trivial task.
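To make this concrete, here is a minimal sketch of what CLI screen-scraping looks like in practice. The command outputs and field layouts below are invented for illustration – each vendor (and sometimes each platform) needs its own parser, and any of these text formats may silently change in the next software release:

```python
import re

# Hypothetical "show interface" output from two vendors describing the
# same state. The formats are illustrative, not taken from real devices.
vendor_a_output = "GigabitEthernet0/1 is up, line protocol is up"
vendor_b_output = "Interface ge-0/0/1, Enabled, Physical link is Up"

def parse_vendor_a(text):
    # Vendor A style: "<name> is <admin>, line protocol is <oper>"
    m = re.match(r"(\S+) is (\w+), line protocol is (\w+)", text)
    return {"interface": m.group(1), "oper_status": m.group(3)}

def parse_vendor_b(text):
    # Vendor B style: "Interface <name>, <admin>, Physical link is <oper>"
    m = re.match(r"Interface (\S+), (\w+), Physical link is (\w+)", text)
    return {"interface": m.group(1), "oper_status": m.group(3).lower()}

print(parse_vendor_a(vendor_a_output))
print(parse_vendor_b(vendor_b_output))
```

Both parsers extract the same two facts, but every new vendor, platform, or software release potentially means another regular expression to write and maintain – which is exactly why CLI-based automation does not scale.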

Today, more and more vendors support protocols such as NETCONF or REST for interacting with their devices. The impression is that you suddenly have a proper API and a standard method of communicating with the devices. The reality is that such protocols give you a standard transport for talking to a device, but you still do not have a real API, since each device/vendor still represents data differently – as brilliantly described by Jason Edelman in this blog post.
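A small, hypothetical illustration of the point: both payloads below describe the same interface and could arrive over the same standard transport (say, REST carrying JSON), yet the keys, nesting, and value encodings differ, so you still end up writing per-vendor normalization code. The field names are made up for the example:

```python
import json

# Hypothetical JSON payloads for the same interface from two vendors:
# same transport (HTTP + JSON), two different data models.
vendor_a = json.loads(
    '{"interface": {"name": "eth0", "admin-state": "enabled", "mtu": 1500}}'
)
vendor_b = json.loads('{"ifName": "eth0", "adminStatus": 1, "l2mtu": 1500}')

def normalize_a(payload):
    # Vendor A nests everything under "interface" and uses string states.
    iface = payload["interface"]
    return {
        "name": iface["name"],
        "enabled": iface["admin-state"] == "enabled",
        "mtu": iface["mtu"],
    }

def normalize_b(payload):
    # Vendor B uses a flat object and encodes admin state as an integer.
    return {
        "name": payload["ifName"],
        "enabled": payload["adminStatus"] == 1,
        "mtu": payload["l2mtu"],
    }

# Only after normalization do the two devices look the same to our tooling.
print(normalize_a(vendor_a) == normalize_b(vendor_b))
```

The transport is standard; the model is not – and the model is where the real integration cost lives.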

We, as an industry, must agree on a standard way of representing network data. No more vendor-specific implementations, but true, open models. The last major attempt was SNMP, the Simple Network Management Protocol, which is anything but simple. Most people just turn it off, or use it to capture (read: poll) very basic information from a device. Anything more complex than that, not to mention device configuration, requires installing vendor-specific MIBs – and we are back to the same problem.
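One way to see where the pain starts: standard MIB-2 objects live under the 1.3.6.1.2.1 subtree and are understood by any SNMP manager, while everything under the "enterprises" subtree (1.3.6.1.4.1) is vendor specific and meaningless without the vendor's MIB files. A toy classifier sketching that boundary, using an example OID under Cisco's enterprise arc (enterprise number 9):

```python
# Standard MIB-2 objects (sysDescr, ifTable, ...) live under 1.3.6.1.2.1
# and any SNMP manager can interpret them. OIDs under the "enterprises"
# subtree (1.3.6.1.4.1.<vendor-number>...) require that vendor's MIBs.
MIB2_PREFIX = "1.3.6.1.2.1."
ENTERPRISES_PREFIX = "1.3.6.1.4.1."

def classify_oid(oid):
    if oid.startswith(MIB2_PREFIX):
        return "standard"         # portable across vendors
    if oid.startswith(ENTERPRISES_PREFIX):
        return "vendor-specific"  # needs the vendor's MIB files
    return "other"

print(classify_oid("1.3.6.1.2.1.1.1.0"))    # sysDescr.0 -> standard
print(classify_oid("1.3.6.1.4.1.9.1.1"))    # under Cisco's arc -> vendor-specific
```

Basic health polling stays inside the standard subtree; almost anything interesting drifts into the enterprise subtree, and with it into per-vendor tooling.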


Reflections on the networking industry, part 1: Welcome to vendor land

I have been involved with networking for quite some time now; I have had the opportunity to design, implement, and operate networks across different environments such as enterprise, data center, and service provider – which inspired me to create this series of short blog posts exploring the computer networking industry: my view on its history, challenges, hype and reality, and most importantly – what’s next and how we can do better.

Part 1: Welcome to vendor land

Protocols and standards have always been a key part of networking, born out of necessity: we need different systems to be able to talk to each other.

The modern networking suite is built around Ethernet and the TCP/IP stack – TCP, UDP, and ICMP, all riding on top of IPv4 or IPv6. There is a general consensus that Ethernet and TCP/IP won the race against the alternatives. This is great, right? Well, the problem is not with Ethernet or the TCP/IP stack, but with their “ecosystem”: a long list of complementary technologies and protocols.

Getting the industry to agree on the base layer 2, layer 3, and layer 4 protocols and their header formats was indeed a big thing, but we kind of stopped there. Say you have a standards-based Ethernet link. How would you bring it up and negotiate its speed? And what about monitoring, loop prevention, or neighbor discovery? Except for the very basic, common-denominator functionality, vendors came up with their own sets of proprietary protocols to solve these issues. Just off the top of my head: ISL, VTP, DTP, UDLD, PAgP, CDP, and PVST are all examples of the “Ethernet ecosystem” from one (!) vendor.

True, today you can find standard alternatives to the protocols mentioned above. Vendors are embracing open standards and tend to replace their proprietary implementations with standard ones when available. But why not start with the standard to begin with?

If you think these are just historical examples from a different era, think again. Even in the 2010s, more and more protocols are being developed and/or adopted by a single vendor only. I usually point to MC-LAG as an example of a fairly recent and very common architecture with no standards-based implementation. This feature alone can lead you to choose one vendor (or even one specific hardware model from one vendor) across your entire network – a perfect vendor lock-in.