SDN with Red Hat OpenStack Platform: OpenDaylight Integration

A short post I wrote for the Red Hat Stack blog on what Red Hat is doing with OpenStack and OpenDaylight. Read more here: SDN with Red Hat OpenStack Platform: OpenDaylight Integration.


Networking sessions in Red Hat Summit 2016

I recently attended the Red Hat Summit 2016 event that took place in San Francisco, CA, on June 27-30. Red Hat Summit is a great place to interact with customers, partners, and product leads, and to learn about Red Hat and the company’s direction.

While Red Hat is still mostly known for its Enterprise Linux (RHEL) business, it also offers products and solutions in the cloud computing, virtualization, middleware, storage, and systems management spaces. And networking is really a key piece in all of these.

In this short post I wanted to highlight a few networking-related sessions presented during the event. While video recordings are not available, the slide decks can be downloaded in PDF format (links included below).

  • Software-defined networking (SDN) fundamentals for NFV, OpenStack, and containers
    • Session overview: With software-defined networking (SDN) gaining traction, administrators are faced with technologies that they need to integrate into their infrastructure. Red Hat Enterprise Linux offers a robust foundation for SDN implementations that are based on open source standard technologies and designed for deploying containers, OpenStack, and network function virtualization (NFV). We’ll dissect the technology stack involved in SDN and introduce the latest Red Hat Enterprise Linux options designed to address the packet processing requirements of virtual network functions (VNFs), such as Open vSwitch (OVS), single root I/O virtualization (SR-IOV), PCI Passthrough, and DPDK-accelerated OVS.
    • Slides

————————————————————————-

  • Use Linux on your whole rack with RDO and open networking
    • Session overview: OpenStack networking is never easy–each new release presents new challenges that are hard to keep up with. Come see how open networking using Linux can help simplify and standardize your RDO deployment. We will demonstrate spine/leaf topology basics, Layer-2 and Layer-3 trade-offs, and building your deployment in a virtual staging environment–all in Linux. Let us demystify your network.
    • Slides

————————————————————————-

  • Extending full stack automation to the physical network
    • Session overview: In this session, we’ll talk about the unique operational challenges facing organizations considering how to encompass the physical network infrastructure when implementing agile practices. We’ll focus on the technical and cultural challenges facing this transition, including how Ansible is uniquely architected to serve as the right foundational framework for powering this change. We’ll touch on why it’s more important than ever that organizations embrace the introduction of new automated orchestration capabilities and start moving away from traditional command-and-control network device administration done hop by hop. You’ll see some of the theories in action and touch on expanding configuration automation to include elements of state validation of configuration changes. Finally, we’ll touch on the changing role of network engineering and operations teams and why their expertise is needed now more than ever to lead this transition.
    • Slides

————————————————————————-

  • Telco breakout: Reliability, availability, and serviceability at cloud scale
    • Session overview: Many operators are faced with fierce market competition that is attracting their customers with personalized alternatives. Technologies, like SDN, NFV, and 5G, hold the key to adapting to the networks of the future. However, operators are also looking to ensure that they can continue to offer the service-level guarantees their customers expect. With the advent of cloud-based service infrastructures, building secure, fault-tolerant, and reliable networks that deliver five nines (99.999%) service availability in the same way they have done for years has become untenable. The goal of zero downtime is still the same, as every hour of it is costly to service providers and their customers. As we continually move to new levels of scale, service providers and their customers expect that infrastructure failures will occur and are proactively changing their development and operational strategies. This session will explore these industry challenges and how service providers are applying new technologies and approaches to achieve reliability, availability, and serviceability at cloud scale. Service providers and vendors will join us to share their views on this complex topic and explain how they are applying and balancing the use of open source innovations, resilient service and application software, automation, DevOps, service assurance, and analytics to add value for their customers and business partners.
    • Slides

————————————————————————-

  • Red Hat Enterprise Linux roadmap
    • Session overview: Red Hat Enterprise Linux is the premier Linux distribution, known for reliability, security, and performance. Red Hat Enterprise Linux is also the underpinning of Red Hat’s efforts in containers, virtualization, Internet of Things (IoT), network function virtualization (NFV), Red Hat Enterprise Linux OpenStack Platform, and more. Learn what’s new and emerging in this powerful operating system, and how new function and capability can help in your environment.
    • Slides

————————————————————————-

  • Repeatable, reliable OpenStack deployments: Pipe dream or reality?
    • Session overview: Deploying OpenStack is an involved, complicated, and error-prone process, especially when deploying a physical Highly Available (HA) cluster with other software and hardware components, like Ceph. Difficulties include everything from hardware selection to the actual deployment process. Dell and Red Hat have partnered together to produce a solution based on Red Hat Enterprise Linux OSP Director that streamlines the entire process of setting up an HA OpenStack cluster. This solution includes a jointly developed reference architecture that includes hardware, simplified Director installation and configuration, Ceph storage backed by multiple back ends including Dell SC and PS series storage arrays, and other enterprise features–such as VM instance HA and networking segregation flexibility. In this session, you’ll learn how this solution drastically simplifies standing up an OpenStack cloud.
    • Slides

————————————————————————-

  • Running a policy-based cloud with Cisco Application Centric Infrastructure, Red Hat OpenStack, and Project Contiv
    • Session overview: Infrastructure managers are constantly asked to push the envelope in how they deliver cloud environments. In addition to speed, scale, and flexibility, they are increasingly focused on both security and operational management and visibility as adoption increases within their organizations. This presentation will look at ways Cisco and Red Hat are partnering together to deliver policy-based cloud solutions to address these growing challenges. We will discuss how we are collaborating in the open source community and building products based on this collaboration. It will cover topics including:
      • Group-Based Policy for OpenStack
      • Cisco Application Centric Infrastructure (ACI) with Red Hat OpenStack
      • Project Contiv and its integration with Cisco ACI
    • Slides

NFV and Open Networking with RHEL OpenStack Platform

(This is a summary version of a talk I gave at the Intel Israel Telecom and NFV event on December 2nd, 2015. Slides are available here)

I was honored to be invited to speak at a local Intel event about Red Hat and what we are doing in the NFV space. I only had 30 minutes, so I tried to provide a high-level overview of our offering, covering some main points:

  • The upstream-first approach and why we believe it is a fundamental piece of the NFV journey; this is not a marketing pitch but really how we deliver our entire product portfolio
  • NFV and OpenStack; I touched on the fact that many service providers are asking for OpenStack-based solutions, and that OpenStack is the de facto choice for the VIM (Virtualized Infrastructure Manager). That said, OpenStack has some limitations today (both cultural and technical), and we clearly have a way to go to make it a better engine for telco needs
  • Full open source approach to NFV; it’s not just OpenStack but also other key projects such as QEMU/KVM, Open vSwitch, DPDK, libvirt, and the underlying Linux operating system. It’s hard to coordinate across these different communities, but this is what we are trying to do, with active participants on all of those
  • Red Hat product focus and alignment with OPNFV
  • Main use-cases we see in the market (atomic VNFs, vCPE, vEPC) with a design example of vPGW using SR-IOV
  • What telco- and NFV-specific features were introduced in RHEL OpenStack Platform 7 (Kilo) and what is planned for OpenStack Platform 8 (Liberty); as a VIM provider we want to offer our customers and the Network Equipment Providers (NEPs) maximum flexibility for packet processing, with PCI Passthrough, SR-IOV, Open vSwitch, and DPDK-accelerated Open vSwitch based solutions (a minimal CLI example is sketched below)
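To make the packet processing options a bit more tangible, here is a minimal sketch of how a tenant could request an SR-IOV (direct) port and attach a VNF instance to it with the Kilo-era CLI. The network, image, and flavor names below are made up for illustration and were not part of the talk:

    # Illustrative only: "provider-net", "vnf-image", and "vpgw" are placeholder names
    neutron port-create provider-net --name vpgw-port0 --binding:vnic_type direct
    nova boot vpgw --flavor m1.large --image vnf-image \
        --nic port-id=<uuid-of-vpgw-port0>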

Thanks to Intel Israel for a very interesting and well-organized event!

Neutron networking with Red Hat Enterprise Linux OpenStack Platform

(This is a summary version of a talk I gave at Red Hat Summit on June 25th, 2015. Slides are available here)

I was honored to speak for the second time in a row at Red Hat Summit, the premier open source technology event, hosted in Boston this year. As I am now focusing on product management for networking in Red Hat Enterprise Linux OpenStack Platform, I presented Red Hat’s approach to Neutron, the OpenStack networking service.

Since OpenStack is a fairly new project and a new product in Red Hat’s portfolio, I was not sure what level of knowledge to expect from my audience. I therefore started with a fairly basic overview of Neutron – what it is and some of the common features you can get from its API today. I was very happy to see that most of the people in the audience were already familiar with OpenStack and Neutron, so the overview part was quick.

The next part of my presentation was a deep dive into Neutron when deployed with the ML2/Open vSwitch (OVS) plugin. This is our default configuration when deploying Red Hat Enterprise Linux OpenStack Platform today, and like any other Red Hat product, it is based on fully open source components. Since there is so much to cover here (and I only had one hour for the entire talk), I focused on the core elements of the solution and the common features we see customers using today: L2 connectivity, L3 routing and NAT for IPv4, and DHCP for IP address assignment. I explained the theory of operation and used some graphics to describe the backend implementation and how things look on the OpenStack nodes.
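For readers who were not in the room, here is a rough sketch (not taken from the slides) of what “how things look on the OpenStack nodes” means in practice for an ML2/OVS deployment; the UUIDs are placeholders:

    ip netns                                               # expect qrouter-<router-uuid> and qdhcp-<network-uuid> namespaces
    ip netns exec qrouter-<router-uuid> ip addr            # qr- (internal) and qg- (external) router interfaces
    ip netns exec qrouter-<router-uuid> iptables -t nat -S # SNAT/DNAT rules implementing IPv4 NAT and floating IPs
    ovs-vsctl show                                         # br-int, br-tun/br-ex and the patch ports between them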

The OVS-based solution is our default, but we are also working with a large number of leading vendors in the industry who provide their own solutions through Neutron plugins. I spent some time describing the various plugins out there, our current partner ecosystem, and Red Hat’s certification program for third-party software.

I then covered some of the major recent enhancements introduced in Red Hat Enterprise Linux OpenStack Platform 6 based on the upstream Juno code base: IPv6 support, L3 HA, and distributed virtual router (DVR) – which is still a Technology Preview feature, yet very interesting to our customers.
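As an example of how these features surface in the API, L3 HA and DVR are simple flags on router creation; a minimal sketch with the Juno/Kilo-era CLI (the router names are arbitrary):

    # L3 HA: the router is backed by active/standby instances across L3 agents
    neutron router-create my-ha-router --ha True

    # DVR (Technology Preview in OSP 6): routing is distributed to the Compute nodes
    neutron router-create my-dvr-router --distributed True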

Overall, I was very happy with this talk and with the number of questions I got at the end. It looks like OpenStack is happening, and more and more customers are interested in finding out more about it. See you next year in San Francisco for Red Hat Summit 2016!

IPv6 Prefix Delegation – what is it and how is it going to help OpenStack?

IPv6 offers several ways to assign IP addresses to end hosts. Some of them (SLAAC, stateful DHCPv6, stateless DHCPv6) were already covered in this post. The IPv6 Prefix Delegation mechanism (described in RFC 3769 and RFC 3633) provides “a way of automatically configuring IPv6 prefixes and addresses on routers and hosts” – which sounds like yet another IP assignment option. How does it differ from the other methods? And why do we need it? Let’s try to figure it out.

Understanding the problem

I know that you still find it hard to believe… but IPv6 is here, and with IPv6 there are enough addresses. That means that we can finally design our networks properly and avoid using different kinds of network address translation (NAT) in different places across the network. Clean IPv6 design will use addresses from the Global Unicast Address (GUA) range, which are routable in the public Internet. Since these are globally routed, care needs to be taken to ensure that prefixes configured by one customer do not overlap with prefixes chosen by another.

While SLAAC or DHCPv6 enable simple and automatic host configuration, they do not provide a mechanism to automatically delegate a prefix to a customer site. With IPv6, there is a need to create a hierarchical model in which the service provider allocates prefixes from a set of pools to the customer. The customer then assigns addresses to its end systems out of the predefined pool. This is powerful, as it gives the service provider control over IPv6 prefix assignment and can eliminate potential conflicts in prefix selection.

How does it work?

With Prefix Delegation, a delegating router (Prefix Delegation Server) delegates IPv6 prefixes to a requesting router (Prefix Delegation Client). The requesting router then uses the prefixes to assign global IPv6 addresses to the devices on its internal interfaces. Prefix Delegation is useful when the delegating router does not have information about the topology of the networks in which the requesting router is located. The delegating router requires only the identity of the requesting router to choose a prefix for delegation. Prefix Delegation is not a new protocol: it uses DHCPv6 messages as defined in RFC 3633, and is thus sometimes referred to as DHCPv6 Prefix Delegation.

DHCPv6 prefix delegation operates as follows:

  1. A delegating router (Server) is provided with IPv6 prefixes to be delegated to requesting routers.
  2. A requesting router (Client) requests one or more prefixes from the delegating router.
  3. The delegating router (Server) chooses prefixes for delegation, and responds with prefixes to the requesting router (Client).
  4. The requesting router (Client) is then responsible for the delegated prefixes.
  5. Final address allocation in the local network can then be performed with SLAAC or stateful/stateless DHCPv6, based on the customer’s preference. At this step the key thing is the IPv6 prefix itself, not how it is delivered to end systems (a configuration sketch follows below).
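To make the two roles more concrete, here is a minimal, illustrative configuration sketch for a delegating router using Dibbler (the open-source DHCPv6 implementation mentioned later in this post); the interface name, prefix pool, and prefix length are assumptions chosen for the example:

    # /etc/dibbler/server.conf (delegating router) -- illustrative values only
    iface "eth1" {
        pd-class {
            pd-pool 2001:db8:8000::/36    # pool of prefixes available for delegation
            pd-length 48                  # each requesting router receives a /48
        }
    }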

IPv6 in OpenStack Neutron

Back in the Icehouse development cycle, the Neutron “subnet” API was enhanced to support IPv6 address assignment options. A reference implementation followed in the Juno cycle, where the dnsmasq and radvd processes were chosen to serve the subnets with Router Advertisements (RAs), SLAAC, or DHCPv6.

In the current Neutron implementation, tenants must supply a prefix when creating subnets. This is not a big deal for IPv4, as tenants are expected to pick private IPv4 subnets for their networks and NAT is going to take place anyway when reaching external public networks. For IPv6 subnets that use Global Unicast Address (GUA) format, addresses are globally routable and cannot overlap. There is no NAT or floating IP model for IPv6 in Neutron. And if you ask me, there should not be one. GUA is the way to go. But can we just trust the tenants to configure their IPv6 prefixes correctly? Probably not, and that’s why Prefix Delegation is an important feature for OpenStack.
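To illustrate the current situation, the tenant has to pick and supply the GUA prefix explicitly at subnet creation time. A rough sketch with the Kilo-era CLI (the network name and prefix are placeholders):

    # Today the tenant supplies the /64 GUA prefix; with Prefix Delegation this would be automatic
    neutron subnet-create --name ipv6-subnet --ip-version 6 \
        --ipv6-ra-mode slaac --ipv6-address-mode slaac \
        tenant-net 2001:db8:1234::/64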

An OpenStack administrator may want to simplify the process of subnet prefix selection for the tenants by automatically supplying prefixes for IPv6 subnets from one or more large pools of pre-configured IPv6 prefixes. The tenant would not need to specify any prefix configuration. Prefix Delegation will take care of the address assignment.

The code is expected to land in OpenStack Liberty based on this specification. Other than REST API changes, a PD client would need to run in the Neutron router network namespace whenever a subnet attached to that router requires prefix delegation. Dibbler is an open-source utility that supports a PD client and can be used to provide the required functionality.
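For reference, the requesting-router side of Dibbler is driven by a small configuration with a pd statement on the upstream interface. A minimal, illustrative sketch (the interface name is a placeholder; in the Neutron case the client would run inside the qrouter namespace):

    # /etc/dibbler/client.conf (requesting router) -- illustrative values only
    iface "qg-1a2b3c4d" {
        pd    # request a delegated prefix on this interface
    }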

OpenStack Networking with Neutron: What Plugin Should I Deploy?

(This is a summary version of a talk I gave at OpenStack Israel event on June 15th, 2015. Slides are available here)

Neutron is probably one of the most pluggable projects in OpenStack today. The theory is very simple: Neutron provides just an API layer, and you choose the backend implementation you want. In reality, there are plenty of plugins (or drivers) to choose from, and the plugin architecture is not always so clear.

The plugin is a critical piece of the deployment and directly affects the feature set you are going to get, as well as the scale, performance, high availability, and supported network topologies. In addition, different plugins offer different approaches for managing and operating the networks.

So what is a Neutron plugin?

The Neutron API exposed via the Neutron server is split into two buckets: the core (L2) API and the API extensions. While the core API consists only of the fundamental Neutron definitions (Network, Subnet, Port), the API extensions are where the interesting stuff gets defined, and where you can deal with constructs like the L3 router, provider networks, or L4-L7 services such as FWaaS, LBaaS, or VPNaaS.
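One quick way to see this split on a live deployment is through the CLI: the core resources have their own create commands, while everything else shows up as an extension. A small, illustrative sketch (the names are made up):

    # Core API resources
    neutron net-create demo-net
    neutron subnet-create --name demo-subnet demo-net 192.168.10.0/24
    neutron port-create demo-net

    # API extensions exposed by the deployed plugin(s): router, provider, security-group, lbaas, and so on
    neutron ext-list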

In order to match this design, the plugin architecture is built out of a “core” plugin (which implements the core API) and one or more “service” plugins (to implement additional “advanced” services defined in the API extensions). To make things more interesting, these advanced network services can also be provided by the core plugin by implementing the relevant extensions.
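In configuration terms, this split maps to two options in neutron.conf. A minimal sketch, where the plugin names are just commonly used example values rather than a recommendation:

    # /etc/neutron/neutron.conf -- illustrative
    [DEFAULT]
    core_plugin = ml2                       # the "core" plugin implementing the core (L2) API
    service_plugins = router,lbaas,vpnaas   # "service" plugins implementing API extensions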

What plugins are out there?

There are many plugins out there, each with its own approach. But when trying to categorize them, I found that it usually boils down to “software centric” plugins versus “hardware centric” plugins.

With the software centric ones, the assumption is that the network hardware is general-purpose, and the functionality is offered, as the name implies, with software only. This is where we get to see most of the overlay networking approaches, with the virtual tunnel end-points (VTEPs) implemented in the Compute/Hypervisor nodes. The requirement from the physical fabric is only to provide basic IP routing/switching. The plugin can use an SDN approach to provision the overlay tunnels in an optimal manner, and to handle broadcast, unknown unicast, and multicast (BUM) traffic efficiently.
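As a simplified example of the software centric model, an ML2 configuration using VXLAN overlays with Open vSwitch, plus the l2population mechanism driver to reduce BUM traffic, might look roughly like this (the values are illustrative):

    # /etc/neutron/plugins/ml2/ml2_conf.ini -- illustrative values
    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,l2population

    [ml2_type_vxlan]
    vni_ranges = 10:1000      # VXLAN VNIs available for tenant networks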

With the hardware centric ones, the assumption is that dedicated network hardware is in place. This is where the traditional network vendors usually offer a combined software/hardware solution that takes advantage of their network gear. The advantages of this design are better performance (if you offload certain network functions to the hardware) and the promise of better manageability and control of the physical fabric.

And what is there by default?

There are efforts in the Neutron community to completely separate the API (or control-plane components) from the plugin or actual implementation. The vision is to position Neutron as a platform, and not as any specific implementation. That being said, Neutron was really developed out of the Open vSwitch plugin, and a good amount of the upstream development today is still focused around it. Open vSwitch (with the OVS ML2 driver) is what you get by default, and this is by far the most common plugin deployed in production (see the recent user survey here). This solution is not perfect and has pros and cons like any other solution out there.

While Open vSwitch is used on the Compute nodes to provide connectivity for VM instances, some of the key components of this solution are actually not related to Open vSwitch. L3 routing, DHCP, and other services are implemented by dedicated software agents using Linux tools such as network namespaces (ip netns), dnsmasq, and iptables.
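You can see this agent-based split directly on a running cloud; a short, illustrative sketch:

    # Which agents are running, and on which nodes
    neutron agent-list        # typically: Open vSwitch agent, L3 agent, DHCP agent, metadata agent

    # DHCP is ultimately dnsmasq, one process per qdhcp namespace on the network node
    pgrep -a dnsmasq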

So how one should choose a plugin?

I am sorry, but there is no easy answer here. From my experience, the best way is to develop a methodical approach:

1. Evaluate the default Open vSwitch-based solution first. Even if you end up not choosing it for your production environment, it should at least get you familiar with the Neutron constructs, definitions, and concepts.

2. Get to know your business needs, and collect technical requirements. Some key questions to answer:

  • Are you building a greenfield deployment?
  • What level of interaction is expected with your existing network?
  • What type of applications are going to run in your cloud?
  • Is self-service required?
  • Who are the end-users?
  • What level of isolation and security is required?
  • What level of QoS is expected?
  • Are you building a multi-cloud, multi-data-center, or hybrid deployment?

3. Test things yourself. Don’t rely on vendor presentations and other marketing materials.