A New Beginning

2019 brings a new beginning for me. After a little over five years at Red Hat, I have decided to move on to my next challenge. This is a big move, and I wanted to take a moment to reflect on those years.

I joined Red Hat in 2013 as Product Manager for Red Hat Enterprise Virtualization (which has since been rebranded to Red Hat Virtualization, or RHV). That was my first PM job ever, and truly a dream come true. I will always be grateful to my manager at the time, Andy Cathrow, for believing in me and offering me the job even though I had no prior PM experience.

When I joined Red Hat we were around 4,500 employees worldwide. We were still primarily known for our enterprise Linux offering, and cloud was not yet a thing. On the personal front, I was in a relationship with my soon-to-be wife, but not married yet.

If I had to sum up my time at Red Hat in just one word, it would be “growth”. I was really fortunate to experience growth in many aspects: the company itself, my career, and my family life.

Fast forward five years and Red Hat is a major player in hybrid cloud, with an impressive portfolio that goes way beyond Linux and touches application development, virtualization, cloud, storage, networking, management, and automation. The company has more than 13,000 (!) employees as of February 2019, and was acquired by IBM in what is expected to be the third-biggest deal in the history of U.S. tech. Seeing this growth from the inside was truly amazing. I have had the chance to work on cool projects and bring new products to market.

Career-wise, I ended up working on three major products: RHV, OpenStack, and OpenShift – and, thanks to Red Hat’s special culture, also working and interacting with many open source communities, including (but not limited to) Kubernetes, oVirt, Open vSwitch, KVM, OpenDaylight, Skydive, Kuryr, OPNFV, DPDK, and Ansible. I had the pleasure of working alongside, and learning from, the most amazing group of people, and of shaping my product management skills and personality. I also learned (the hard way) what it’s like to work from home on a highly distributed team spread across Israel, Europe, and the U.S. East and West Coasts.

On the personal front, I am now married with two beautiful children. Integrating “work” and “life” and building the optimal schedule was another key lesson from my time at Red Hat.

Next, I am joining the Facebook Connectivity team to work on a new product. Facebook certainly feels different from what I did previously, although I am still going to be involved with networking. The Connectivity mission of bringing more people online to a faster Internet is near and dear to my heart.

Moving on was not an easy decision. That said, I felt like this was the right thing for me and for my career. Among other things, I wanted to experience what it’s like to be a PM in a different company, and explore new markets and people problems.

Red Hat OpenStack Platform 13: five things you need to know about networking

A post I wrote for the Red Hat Stack blog, on key networking features included in Red Hat OpenStack Platform 13. Read more here: Red Hat OpenStack Platform 13: five things you need to know about networking.

Networking sessions in Red Hat Summit 2016

I recently attended Red Hat Summit 2016, which took place in San Francisco, CA, on June 27-30. Red Hat Summit is a great place to interact with customers, partners, and product leads, and to learn about Red Hat and the company’s direction.

While Red Hat is still mostly known for its Enterprise Linux (RHEL) business, it also offers products and solutions in the cloud computing, virtualization, middleware, storage, and systems management spaces. And networking is really a key piece in all of these.

In this short post I wanted to highlight a few sessions relevant to networking that were presented during the event. While video recordings are not available, slide decks can be downloaded in PDF format (links included below).

  • Software-defined networking (SDN) fundamentals for NFV, OpenStack, and containers
    • Session overview: With software-defined networking (SDN) gaining traction, administrators are faced with technologies that they need to integrate into their infrastructure. Red Hat Enterprise Linux offers a robust foundation for SDN implementations that are based on open, standards-based technologies and designed for deploying containers, OpenStack, and network function virtualization (NFV). We’ll dissect the technology stack involved in SDN and introduce the latest Red Hat Enterprise Linux options designed to address the packet processing requirements of virtual network functions (VNFs), such as Open vSwitch (OVS), single root I/O virtualization (SR-IOV), PCI Passthrough, and DPDK-accelerated OVS.
    • Slides

————————————————————————-

  • Use Linux on your whole rack with RDO and open networking
    • Session overview: OpenStack networking is never easy–each new release presents new challenges that are hard to keep up with. Come see how open networking using Linux can help simplify and standardize your RDO deployment. We will demonstrate spine/leaf topology basics, Layer-2 and Layer-3 trade-offs, and building your deployment in a virtual staging environment–all in Linux. Let us demystify your network.
    • Slides

————————————————————————-

  • Extending full stack automation to the physical network
    • Session overview: In this session, we’ll talk about the unique operational challenges facing organizations considering how to encompass the physical network infrastructure when implementing agile practices. We’ll focus on the technical and cultural challenges facing this transition, including how Ansible is uniquely architected to serve as the right foundational framework for powering this change. We’ll touch on why it’s more important than ever that organizations embrace the introduction of new automated orchestration capabilities and start moving away from traditional command and control network device administration being done hop by hop. You’ll see some of the theories in action and touch on expanding configuration automation to include elements of state validation of configuration changes. Finally, we’ll touch on the changing role of network engineering and operations teams and why their expertise is needed now more than ever to lead this transition.
    • Slides

————————————————————————-

  • Telco breakout: Reliability, availability, and serviceability at cloud scale
    • Session overview: Many operators are faced with fierce market competition that is attracting their customers with personalized alternatives. Technologies, like SDN, NFV, and 5G, hold the key to adapting to the networks of the future. However, operators are also looking to ensure that they can continue to offer the service-level guarantees their customers expect. With the advent of cloud-based service infrastructures, building secure, fault-tolerant, and reliable networks that deliver five nines (99.999%) service availability in the same way they have done for years has become untenable. The goal of zero downtime is still the same, as every hour of it is costly to service providers and their customers. As we continually move to new levels of scale, service providers and their customers expect that infrastructure failures will occur and are proactively changing their development and operational strategies. This session will explore these industry challenges and how service providers are applying new technologies and approaches to achieve reliability, availability, and serviceability at cloud scale. Service providers and vendors will join us to share their views on this complex topic and explain how they are applying and balancing the use of open source innovations, resilient service and application software, automation, DevOps, service assurance, and analytics to add value for their customers and business partners.
    • Slides

————————————————————————-

  • Red Hat Enterprise Linux roadmap
    • Session overview: Red Hat Enterprise Linux is the premier Linux distribution, known for reliability, security, and performance. Red Hat Enterprise Linux is also the underpinning of Red Hat’s efforts in containers, virtualization, Internet of Things (IoT), network function virtualization (NFV), Red Hat Enterprise Linux OpenStack Platform, and more. Learn what’s new and emerging in this powerful operating system, and how new function and capability can help in your environment.
    • Slides

————————————————————————-

  • Repeatable, reliable OpenStack deployments: Pipe dream or reality?
    • Session overview: Deploying OpenStack is an involved, complicated, and error-prone process, especially when deploying a physical Highly Available (HA) cluster with other software and hardware components, like Ceph. Difficulties include everything from hardware selection to the actual deployment process. Dell and Red Hat have partnered together to produce a solution based on Red Hat Enterprise Linux OSP Director that streamlines the entire process of setting up an HA OpenStack cluster. This solution includes a jointly developed reference architecture that includes hardware, simplified Director installation and configuration, Ceph storage backed by multiple back ends including Dell SC and PS series storage arrays, and other enterprise features–such as VM instance HA and networking segregation flexibility. In this session, you’ll learn how this solution drastically simplifies standing up an OpenStack cloud.
    • Slides

————————————————————————-

  • Running a policy-based cloud with Cisco Application Centric Infrastructure, Red Hat OpenStack, and Project Contiv
    • Session overview: Infrastructure managers are constantly asked to push the envelope in how they deliver cloud environments. In addition to speed, scale, and flexibility, they are increasingly focused on both security and operational management and visibility as adoption increases within their organizations. This presentation will look at ways Cisco and Red Hat are partnering together to deliver policy-based cloud solutions to address these growing challenges. We will discuss how we are collaborating in the open source community and building products based on this collaboration. It will cover topics including:
      • Group-Based Policy for OpenStack
      • Cisco Application Centric Infrastructure (ACI) with Red Hat OpenStack
      • Project Contiv and its integration with Cisco ACI
    • Slides


NFV and Open Networking with RHEL OpenStack Platform

(This is a summary version of a talk I gave at the Intel Israel Telecom and NFV event on December 2nd, 2015. Slides are available here.)

I was honored to be invited to speak at a local Intel event about Red Hat and what we are doing in the NFV space. I only had 30 minutes, so I tried to provide a high-level overview of our offering, covering some main points:

  • Upstream first approach and why we believe it is a fundamental piece in the NFV journey; this is not a marketing pitch but really how we deliver our entire product portfolio
  • NFV and OpenStack; I touched on the fact that many service providers are asking for OpenStack-based solutions, and that OpenStack is the de facto choice for VIM. That said, there are some limitations today (both cultural and technical) with OpenStack, and we clearly have a way to go to make it a better engine for telco needs
  • Full open source approach to NFV; it’s not just OpenStack but also other key projects such as QEMU/KVM, Open vSwitch, DPDK, libvirt, and the underlying Linux operating system. It’s hard to coordinate across these different communities, but this is what we are trying to do, with active participants on all of those
  • Red Hat product focus and alignment with OPNFV
  • Main use-cases we see in the market (atomic VNFs, vCPE, vEPC) with a design example of vPGW using SR-IOV
  • What telco and NFV specific features were introduced in RHEL OpenStack Platform 7 (Kilo) and what is planned for OpenStack Platform 8 (Liberty); as a VIM provider we want to offer our customers and the Network Equipment Providers (NEPs) maximum flexibility for packet processing options with PCI Passthrough, SR-IOV, Open vSwitch and DPDK-accelerated Open vSwitch based solutions.

Thanks to Intel Israel for a very interesting and well-organized event!


LLDP traffic and Linux bridges

In my previous post I described my Cumulus VX lab environment, which is based on Fedora and KVM. One of the first things I noticed after bringing up the setup is that although I have L3 connectivity between the emulated Cumulus switches, I can’t get LLDP to operate properly between the devices.

For example, a basic ICMP ping between the directly connected interfaces of leaf1 and spine3 is successful, but no LLDP neighbor shows up:

cumulus@leaf1$ ping 13.0.0.3
PING 13.0.0.3 (13.0.0.3) 56(84) bytes of data.
64 bytes from 13.0.0.3: icmp_req=1 ttl=64 time=0.210 ms
64 bytes from 13.0.0.3: icmp_req=2 ttl=64 time=0.660 ms
64 bytes from 13.0.0.3: icmp_req=3 ttl=64 time=0.635 ms
cumulus@leaf1$ lldpcli show neighbors 

LLDP neighbors:
-------------------------------------

Reading through the Cumulus Networks documentation, I discovered that LLDP is turned on by default on all active interfaces. It is possible to tweak things, such as timers, but the basic neighbor discovery functionality should be there out of the box.
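
For reference, the transmit timer can be adjusted through lldpcli as well. A minimal sketch, assuming the lldpd-based implementation that ships with Cumulus Linux (exact syntax may vary between releases):

cumulus@leaf1$ sudo lldpcli configure lldp tx-interval 10
cumulus@leaf1$ lldpcli show configuration

None of this is required for basic neighbor discovery, though.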

Looking at the output from lldpcli show statistics, I also discovered that LLDP messages are being sent out of the interfaces but never received:

cumulus@leaf1$ lldpcli show statistics 

Interface:    eth0
  Transmitted:  11
  Received:     0
  Discarded:    0
  Unrecognized: 0
  Ageout:       0
  Inserted:     0
  Deleted:      0

Interface:    swp1
  Transmitted:  11
  Received:     0
  Discarded:    0
  Unrecognized: 0
  Ageout:       0
  Inserted:     0
  Deleted:      0

Interface:    swp2
  Transmitted:  11
  Received:     0
  Discarded:    0
  Unrecognized: 0
  Ageout:       0
  Inserted:     0
  Deleted:      0

So what’s going on?

Remember that leaf1 and spine3 are not really directly connected. They are bridged together using a Linux bridge device.
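
This is easy to confirm from the KVM host by listing the interfaces attached to the bridge. A quick check, assuming (as in my setup) that virbr1 is the bridge carrying the leaf1 to spine3 link; the vnet tap device names will differ on your machine:

# brctl show virbr1
# ip link show master virbr1

Both commands should list the two vnet tap devices that back swp1 on leaf1 and swp1 on spine3.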

This is where I discovered that by design, Linux bridges silently drop LLDP messages (sent to the LLDP_Multicast address 01-80-C2-00-00-0E) and other control frames in the 01-80-C2-00-00-xx range.

An explanation can be found in the 802.1AB standard, which states that “the destination address shall be 01-80-C2-00-00-0E. This address is within the range reserved by IEEE Std 802.1D-2004 for protocols constrained to an individual LAN, and ensures that the LLDPDU will not be forwarded by MAC Bridges that conform to IEEE Std 802.1D-2004.”

It is possible to change this behavior on a per-bridge basis, though, by setting the bridge’s group_fwd_mask:

# echo 16384 > /sys/class/net/<bridge_name>/bridge/group_fwd_mask
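
The value 16384 is not arbitrary: each bit of group_fwd_mask maps to the last byte of a 01-80-C2-00-00-xx group address. LLDP uses 0x0E, which is 14, so the bit to set is 1 << 14 = 0x4000 = 16384. The current value can be read back to confirm the change:

# cat /sys/class/net/<bridge_name>/bridge/group_fwd_mask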

Retesting with leaf1 and spine3:

# echo 16384 > /sys/class/net/virbr1/bridge/group_fwd_mask
cumulus@leaf1$ lldpcli show neighbor
LLDP neighbors:

Interface:    swp1, via: LLDP, RID: 1, Time: 0 day, 00:00:02  
  Chassis:     
    ChassisID:    mac 00:00:00:00:00:33
    SysName:      spine3
    SysDescr:     Cumulus Linux version 2.5.5 running on  QEMU Standard PC (i440FX + PIIX, 1996)
    MgmtIP:       3.3.3.3
    Capability:   Bridge, off
    Capability:   Router, on
  Port:        
    PortID:       ifname swp1
    PortDescr:    swp1
cumulus@leaf1$ lldpcli show statistics 

Interface:      eth0
  Transmitted:  117
  Received:     0
  Discarded:    0
  Unrecognized: 0
  Ageout:       0
  Inserted:     0
  Deleted:      0

Interface:      swp1
  Transmitted:  117
  Received:     72
  Discarded:    0
  Unrecognized: 0
  Ageout:       0
  Inserted:     1
  Deleted:      0

Interface:      swp2
  Transmitted:  117
  Received:     0
  Discarded:    0
  Unrecognized: 0
  Ageout:       0
  Inserted:     0
  Deleted:      0


LLDP now operates as expected between leaf1 and spine3. Remember that this is a per-bridge setting, so in order to get this fixed across the entire setup, the command needs to be issued for the rest of the bridges (virbr2, virbr3, virbr4) as well, for example:
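
A minimal sketch covering them all at once (using the bridge names from my lab); note that this sysfs setting does not persist across host reboots:

# for br in virbr2 virbr3 virbr4; do echo 16384 > /sys/class/net/${br}/bridge/group_fwd_mask; done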