Friday, May 17, 2013

Performance analysis of network tunnels in SDN cloud network

An additional network overlay is the foundation and building block for most modern cloud network architectures today. In practice this means that before we can even think about how to architect and build a network for the cloud, we need to build a solid and reliable multi-tiered IP network topology to interconnect our hypervisor servers. Of course this is a big simplification and there are many vendors that provide hardware support for a cloud network (SDN enabled network). Examples are Brocade VCS/MLX or the Cisco Nexus platform (see Random thoughts about Cisco Nexus product line).

But what is important is that, in essence, what we are going to build is a typical multi-tier network with access, distribution and core layers, like the example below:


Once the physical network is built, it is time to add the cloud network elements. This is again a very simplistic view that avoids all the technical details.

The industry is still working to establish a common ground and consensus on how a cloud network should look and what services it should provide, but in practice (based on a few companies like Nicira or Midokura) it is tightly associated with the Software Defined Networking (SDN) concept and architecture. The common practice today is to implement the SDN network as an additional network overlay on top of an IP fabric infrastructure.

Like every network, a cloud network needs to provide IP connectivity for cloud resources (cloud servers, for example). To achieve this, all hypervisors are often inter-connected using tunneling protocols. This model allows us to decouple the cloud network from the physical one and gives us more flexibility. That way all VM traffic is routed within the tunnels. Solving the cloud network problem then comes down to finding a way to route between the hypervisors using the tunnels.
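To make this more concrete, here is a minimal sketch of how a tunnel port can be attached to a hypervisor switch. It assumes Open vSwitch is installed; the bridge name and the remote hypervisor IP are made-up examples, not values from this post.

#!/usr/bin/env python
# Sketch: attach a GRE tunnel port to an OVS bridge so that VM traffic
# between two hypervisors is encapsulated over the underlying IP fabric.
# Assumes Open vSwitch is installed; bridge name and IP are examples.
import subprocess

BRIDGE = "br-int"               # bridge the local VMs are plugged into (example)
REMOTE_HYPERVISOR = "10.0.0.2"  # underlay IP of the peer hypervisor (example)

def sh(cmd):
    """Run a shell command and raise if it fails."""
    print("+ " + cmd)
    subprocess.check_call(cmd, shell=True)

# Create the bridge (idempotent thanks to --may-exist) and add a GRE port
# pointing at the remote hypervisor.
sh("ovs-vsctl --may-exist add-br {0}".format(BRIDGE))
sh("ovs-vsctl add-port {0} gre0 -- set interface gre0 "
   "type=gre options:remote_ip={1}".format(BRIDGE, REMOTE_HYPERVISOR))

The same pattern applies to the other encapsulations; only the interface type and options change.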

Data flows, connections and tunnels are managed by a cloud controller (a distributed server cluster) that needs to be deployed in our existing physical network. An example of the Nicira NVP controller can be found here: Network Virtualization: a next generation modular platform for the data center virtual network.
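One reason a controller is needed at all is the sheer number of tunnels to keep track of. The sketch below assumes the simplest possible model, a full mesh where every hypervisor keeps a tunnel to every other one; the hypervisor IPs are invented for illustration.

# Sketch: a controller-style view of the tunnel mesh. With a full mesh,
# n hypervisors need n*(n-1)/2 tunnels, which grows quickly.
from itertools import combinations

hypervisors = ["10.0.0.%d" % i for i in range(1, 5)]  # 4 example hypervisors

tunnels = list(combinations(hypervisors, 2))
for local, remote in tunnels:
    print("tunnel %s <-> %s" % (local, remote))

print("%d hypervisors -> %d tunnels" % (len(hypervisors), len(tunnels)))
# 4 hypervisors -> 6 tunnels; at 100 hypervisors it is already 4950.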

As described above, VM data will be routed within tunnels. These are the most popular ones: NVGRE, STT and VXLAN (Introduction into tunneling protocols when deploying cloud network for your cloud infrastructure).
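Each encapsulation adds its own headers, which eats into the MTU available to the VM. The numbers below are my own approximate header sizes (GRE without options, NVGRE with the key field, VXLAN over UDP, STT with its TCP-like header), not figures from this post, so treat the sketch as a rough back-of-the-envelope calculation only.

# Sketch: how much of a standard 1500 byte MTU is left for the inner frame
# once approximate tunnel headers are added. Header sizes are assumptions.
MTU = 1500

overhead = {
    "GRE":   20 + 4,        # outer IP + basic GRE header (approx.)
    "NVGRE": 20 + 8,        # outer IP + GRE header with key field (approx.)
    "VXLAN": 20 + 8 + 8,    # outer IP + UDP + VXLAN header (approx.)
    "STT":   20 + 20 + 18,  # outer IP + TCP-like header + STT header (approx.)
}

for name, bytes_used in sorted(overhead.items()):
    left = MTU - bytes_used
    print("%-6s overhead %2d B, %4d B left for the inner frame (%.1f%%)"
          % (name, bytes_used, left, 100.0 * left / MTU))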

As tunnels require additional resources, there is an open question about what overhead, resource consumption and performance implications they introduce. This post, The Overhead of Software Tunneling (*), makes a comparison and tries to shed some more light on the topic.

                Throughput   Recv side CPU   Send side CPU
Linux Bridge:   9.3 Gbps     85%             75%
OVS Bridge:     9.4 Gbps     82%             70%
OVS-STT:        9.5 Gbps     70%             70%
OVS-GRE:        2.3 Gbps     75%             97%

This next table shows the aggregate throughput of two hypervisors with 4 VMs each.

                Throughput   CPU
OVS Bridge:     18.4 Gbps    150%
OVS-STT:        18.5 Gbps    120%
OVS-GRE:        2.3 Gbps     150%

We can see that not all tunnels are completely transparent when it comes to performance. The GRE tunnel shows a significant degradation in throughput, while the TCP-based STT tunnel performs on par with plain bridging. For a complete analysis, explanation and further discussion I recommend reading the blog above (*).
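To put the aggregate numbers from the second table on a common scale, the small sketch below normalises them to throughput per CPU percent; the values are exactly those quoted above.

# Sketch: normalise the aggregate results (two hypervisors, 4 VMs each)
# from the table above to throughput per CPU percent.
results = {
    "OVS Bridge": (18.4, 150),  # (Gbps, CPU %)
    "OVS-STT":    (18.5, 120),
    "OVS-GRE":    ( 2.3, 150),
}

for name, (gbps, cpu) in sorted(results.items()):
    print("%-10s %.3f Gbps per CPU %%" % (name, gbps / cpu))

Normalised this way, STT is roughly an order of magnitude more efficient than GRE here; the linked post (*) attributes most of the difference to hardware offloads, since STT frames look like TCP segments and can still benefit from NIC segmentation offload while GRE-encapsulated traffic cannot.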

References
  1. http://rtomaszewski.blogspot.co.uk/2012/12/what-do-you-need-to-implement-virtual.html
  2. http://rtomaszewski.blogspot.co.uk/2012/11/what-network-topologies-can-i-build.html
  3. http://rtomaszewski.blogspot.co.uk/2012/11/after-server-virtualization-there-is.html
  4. http://rtomaszewski.blogspot.co.uk/2013/05/sdn-system-software-architecture.html
  5. http://bradhedlund.com/2012/10/06/mind-blowing-l2-l4-network-virtualization-by-midokura-midonet/
  6. http://blog.ioshints.info/2012/08/midokuras-midonet-layer-2-4-virtual.html
