
SDN Journal






Software Defined Networking | Part 2, by Michael Jannery

SDN technologies are broadly split into two fundamentally different paradigms - "overlay" SDN and "underlay" SDN

In the first part of this blog series on SDN, I gave a quick background overview. This installment covers overlay SDN and underlay SDN.

SDN technologies are broadly split into two fundamentally different paradigms - "overlay" SDN and "underlay" SDN. With overlay SDN, the virtual network is implemented on top of an existing physical network. With underlay SDN, the fabric of the underlying network itself is reconfigured to create the paths required for inter-endpoint SDN connectivity.

Overlay SDN solutions (e.g., VMware NSX and Juniper Contrail) use tunneling technologies such as VXLAN, STT and GRE to create tunnel endpoints within the hypervisor's virtual switches, and rely on the existing network fabric to transport the encapsulated packets between those endpoints using existing routing and switching protocols. One advantage of encapsulation is that only the tunneling protocol endpoint IP addresses (TPEP IPs) are visible in the core network - the IP addresses of the intercommunicating VMs are not exposed. (The downside is that without specific VXLAN awareness, traffic sniffers, flow analyzers, etc. can only report on TPEP-to-TPEP conversations, not inter-VM flows.) Another advantage of encapsulated overlay networks is that there is no need for tenant segregation within the core (e.g., using MPLS VPNs, 802.1Q VLANs or VRFs), as segregation is implicitly enforced by the tunneling protocol and the TPEPs.
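To make the segregation point concrete, here is a minimal sketch of the 8-byte VXLAN header defined in RFC 7348, which carries the 24-bit VXLAN Network Identifier (VNI) that keeps tenants apart; the surrounding UDP/IP framing and the example VNI value are omitted or invented for illustration.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Byte 0: flags - bit 0x08 set means the VNI field is valid.
    Bytes 1-3: reserved. Bytes 4-6: 24-bit VNI. Byte 7: reserved.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!BBH", 0x08, 0, 0) + struct.pack("!I", vni << 8)

def parse_vni(header: bytes) -> int:
    """Recover the tenant's VNI from a VXLAN header."""
    return struct.unpack("!I", header[4:8])[0] >> 8

# Two tenants' VMs can reuse overlapping IP space; the VNI keeps them apart.
hdr = vxlan_header(5001)
assert len(hdr) == 8
assert parse_vni(hdr) == 5001
```

Because the VNI travels only inside the tunnel header, the core fabric never needs per-tenant VLANs or VRFs - it just forwards UDP between TPEPs.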

One of the major drawbacks of overlay SDN (such as NSX) is that it has little, if any, network awareness - i.e., it cannot control, influence or even see how traffic flows through the network from one TPEP to another. This has serious implications for traffic engineering, fault isolation, load distribution, security, etc. Proponents of overlay SDN often assert that, since datacenter network fabrics are invariably highly resilient and significantly over-provisioned, this is not a significant issue. The argument is less convincing once traffic heads out of the datacenter into the campus and across the WAN.
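The visibility gap is easy to illustrate with a toy example (all addresses invented): a flow analyzer without VXLAN awareness keys on the outer tunnel addresses, so distinct inter-VM conversations collapse into a single TPEP-to-TPEP flow.

```python
from collections import Counter

# Hypothetical captured packets between two tunnel endpoints.
# Each tuple: (outer_src, outer_dst, inner_src, inner_dst)
packets = [
    ("10.0.0.1", "10.0.0.2", "192.168.1.10", "192.168.2.20"),
    ("10.0.0.1", "10.0.0.2", "192.168.1.11", "192.168.2.21"),
    ("10.0.0.1", "10.0.0.2", "192.168.1.12", "192.168.2.22"),
]

# A plain analyzer sees only the outer (TPEP) addresses...
naive = Counter((p[0], p[1]) for p in packets)
# ...while a VXLAN-aware one can report the real inter-VM flows.
aware = Counter((p[2], p[3]) for p in packets)

assert len(naive) == 1  # three VM conversations look like one flow
assert len(aware) == 3
```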

Underlay SDN solutions (OpenFlow-based controllers, Cisco ACI, Juniper QFabric, Cisco FabricPath, etc.) directly manipulate the forwarding tables of network components to create specific paths through the network - i.e., they intrinsically embed the end-to-end network paths within the network fabric. The SDN controller is responsible for directly manipulating network element configuration so that the requirements presented at the controller's northbound API are correctly orchestrated. With intimate knowledge of the network topology, the configured paths through the fabric and link-level metrics (e.g., bandwidth, latency, cost), much more efficient utilization of the network infrastructure can be achieved using more sophisticated route-packing algorithms - e.g., deliberately placing some flows on sub-optimal (non-shortest) paths. Another advantage of underlay SDN is that the controller dictates exactly which path through the network each traffic flow traverses, which is invaluable for troubleshooting, impact analysis and security.
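Conceptually, an underlay controller does two things: compute a path over the topology it knows, then push a per-hop forwarding entry along that path. The sketch below shows that idea with Dijkstra's algorithm over a tiny invented leaf-spine fabric; the switch names, metrics and rule format are illustrative only, not any vendor's actual API.

```python
import heapq

# Toy leaf-spine fabric; link costs stand in for bandwidth/latency metrics.
topology = {
    "leaf1":  {"spine1": 1, "spine2": 1},
    "leaf2":  {"spine1": 1, "spine2": 1},
    "spine1": {"leaf1": 1, "leaf2": 1},
    "spine2": {"leaf1": 1, "leaf2": 1},
}

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over the link metrics."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def install_flow(path, match):
    """One (switch, match, next-hop) entry per hop: the path is
    embedded directly in the fabric's forwarding tables."""
    return [(hop, match, nxt) for hop, nxt in zip(path, path[1:])]

path = shortest_path(topology, "leaf1", "leaf2")
rules = install_flow(path, match="dst=192.168.2.0/24")
```

Because the controller chose the path, it knows exactly which switches carry the flow - the troubleshooting and impact-analysis advantage described above.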

The industry is currently split between network architects who prefer overlay networks and those who prefer underlay networks. It is not a decision to be taken lightly, as it has far-reaching implications for complexity, troubleshooting, monitoring, SLA compliance, performance management, root cause analysis (RCA) and cost.

The next installment in this series will cover whether an all-virtual environment is ideal, or whether some physical hardware is still needed.

More Stories By Michael Jannery

Michael Jannery is CEO of Entuity. He is responsible for setting the overall corporate strategy, vision, and direction for the company. He brings more than 30 years of experience to Entuity with 25 years in executive management.

Prior to Entuity, he was Vice President of Marketing for Proficiency, where he established the company as the thought, technology, and market leader in a new product lifecycle management (PLM) sub-market. Earlier, Michael held VP of Marketing positions at Gradient Technologies, where he established them as a market leader in the Internet security sector, and Cayenne Software, a leader in the software and database modeling market. He began his career in engineering.