SOLUTION BRIEF
Ericsson Cloud SDN with Netronome Agilio® Server Networking Platform Achieves Massive TCO Savings in Cloud Data Centers

KEY BENEFITS
- Software-defined flexibility and dynamic control
- More data delivered to more applications per server
- High-scale security with high performance
- Lower TCO

Server-based networking is one of the key ingredients of cloud software-defined networking (SDN) data center deployments on Telco-grade infrastructure such as the Ericsson HDS 8000 Hyperscale Datacenter System. With Ericsson Cloud SDN, integration with OpenStack and OpenDaylight provides a solid framework for implementing functions such as dynamic and seamless L3 VPN connectivity. At the same time, high levels of policy scalability are needed so that distributed cloud infrastructures can be automated and governed for security and compliance.

However, significant challenges arise when trying to implement cloud SDN with high-scale security purely with a software-based approach. These operations are extremely CPU intensive and consume a great deal of server CPU resources that could otherwise be used to run IT or network functions virtualization (NFV) applications. The result is a negative impact on total cost of ownership (TCO), because more servers are needed to run a given number of applications.

Netronome's Agilio SmartNICs eliminate these problems and improve data center TCO by providing a way to implement these server-based networking functions far more efficiently, without sacrificing any of their benefits. Agilio implements the server-based networking data path directly on the server adapter, offloading the server CPU cores from this work. Accelerating and offloading server-based networking to Agilio SmartNICs allows the same amount of application work to be performed by far fewer server resources. This improvement in application efficiency drives a dramatic improvement in overall TCO for both IT and NFV workloads.

First, we will look at the IT use case. IT workloads are typically storage or compute bound and therefore do not consume as much network data as NFV applications. Let us assume that for a given IT cloud deployment, the average application running on a VM or container consumes 300Mbps at an average packet size of 200 Bytes (about 200K packets/sec). If the server-based networking functions, in this case driven by Open vSwitch (OVS), are implemented in software using 4 Xeon CPU cores, there will only be enough data to feed up to 18 applications. The remaining capacity in the 24-core Xeon server is effectively stranded and underutilized. The implementation with Agilio, on the other hand, uses only 1 Xeon core for the server-based networking functions, leaving 23 Xeon cores available for applications running on VMs or containers.
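To make the per-server arithmetic concrete, the minimal Python sketch below reproduces the IT use case numbers. The per-core software OVS rate (~0.9 Mpps), the ~22 Mpps delivered by the offloaded datapath, and the VM-per-core density are illustrative assumptions back-calculated from the figures in this brief, not measured values.

```python
# Hypothetical per-server capacity model for the IT use case.
# All throughput and density figures below are illustrative assumptions
# back-calculated from Figure 1, not measured values.

TOTAL_CORES = 24      # Xeon cores per server (from the brief)
APP_PPS = 200e3       # ~300 Mbps / (200 B x 8 b/B) = 187.5 Kpps, rounded to 200 Kpps as in the brief

def apps_per_server(ovs_cores, ovs_mpps_per_core=None, nic_mpps=None, apps_per_core=5):
    """Applications one server can feed, limited by whichever is smaller:
    the packet rate the OVS datapath can deliver, or the cores left for VMs."""
    app_cores = TOTAL_CORES - ovs_cores
    if ovs_mpps_per_core is not None:           # software OVS running on host cores
        deliverable_pps = ovs_cores * ovs_mpps_per_core * 1e6
    else:                                       # OVS datapath offloaded to the SmartNIC
        deliverable_pps = nic_mpps * 1e6
    network_limited = int(deliverable_pps / APP_PPS)
    core_limited = app_cores * apps_per_core    # assumed VM/container density per core
    return min(network_limited, core_limited)

# Software OVS: 4 host cores at an assumed ~0.9 Mpps each -> 18 applications
print(apps_per_server(ovs_cores=4, ovs_mpps_per_core=0.9))
# Agilio offload: 1 host core, ~22 Mpps assumed from the NIC -> 110 applications
print(apps_per_server(ovs_cores=1, nic_mpps=22))
```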
Furthermore, since Agilio can process and deliver far more network data, it can run many more applications per server and per rack at the same performance level. The analysis in Figure 1 shows that it would take over 6 racks to run the same number of applications when using a traditional NIC and software OVS, compared to just one rack with Agilio performing OVS acceleration and offload.

[Figure 1: IT Use Case. Left: Ericsson Cloud SDN with OVS in user space and a traditional NIC - 4 cores for OVS, 20 cores for VMs or containers, 18 applications per server at 300Mbps/200Kpps each, 72Mpps and 360 applications per rack. Right: Ericsson Cloud SDN with OVS running on the Netronome Agilio platform - 1 core for OVS, 23 cores for VMs or containers, 110 applications per server at 300Mbps/200Kpps each, 440Mpps and 2200 applications per rack. Racks needed to support 2200 applications: over 6 with software OVS versus 1 with Agilio, for 6X lower TCO.]

Now we will examine the NFV use case.
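Rolling the same IT use case numbers up to the rack level yields the 6X figure. The short sketch below shows the arithmetic; a 20-server rack is assumed here (implied by Figure 1: 360 applications per rack at 18 per server), and this is an illustration, not output from the TCO calculator.

```python
import math

# Rack-level roll-up of the IT use case.
SERVERS_PER_RACK = 20          # assumed, implied by Figure 1 (360 apps / 18 apps per server)
APPS_PER_SERVER_SW = 18        # software OVS on 4 host cores
APPS_PER_SERVER_AGILIO = 110   # OVS offloaded to the Agilio SmartNIC

apps_per_rack_sw = APPS_PER_SERVER_SW * SERVERS_PER_RACK          # 360
apps_per_rack_agilio = APPS_PER_SERVER_AGILIO * SERVERS_PER_RACK  # 2200

target_apps = 2200
racks_sw = target_apps / apps_per_rack_sw          # ~6.1 -> "over 6 racks" (7 physical racks)
racks_agilio = target_apps / apps_per_rack_agilio  # 1 rack

print(f"software OVS: {math.ceil(racks_sw)} racks, Agilio: {math.ceil(racks_agilio)} rack")
print(f"rack (and roughly TCO) ratio: {racks_sw / racks_agilio:.1f}x")   # ~6x
```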
NFV workloads are typically lighter on compute and storage, and are optimized to process network data at higher speeds. Assume that for a given NFV deployment, the average application running on a VM or container consumes 3Gbps at an average packet size of 200 Bytes (which equates to about 2 million packets/sec). For this workload, with software OVS, we would have to dedicate many more Xeon cores to achieve adequate data delivery to the NFV applications. In the example below, we dedicate 16 cores to OVS, about the maximum that can reasonably be allocated due to stability and scaling issues. Even with so many cores dedicated to OVS, only enough network data can be processed and delivered to run 4 NFV applications. The implementation with Agilio, on the other hand, is able to process and deliver enough data to run 11 NFV applications at the same performance level. The analysis in Figure 2 shows that it would take almost three racks to run the same number of NFV applications when using a traditional NIC and software OVS, compared to just one rack with Agilio performing OVS acceleration and offload.

[Figure 2: NFV Use Case. Left: Ericsson Cloud SDN with OVS in user space and a traditional NIC - 16 cores for OVS, 8 cores for VMs, 4 VNFs per server at 3Gbps/2Mpps each, 168Mpps and 80 VNFs per rack. Right: Ericsson Cloud SDN with OVS running on the Netronome Agilio platform - 1 core for OVS, 23 cores for VMs or containers, 11 VNFs per server at 3Gbps/2Mpps each, 440Mpps and 220 VNFs per rack. Racks needed to support 220 VNFs: almost 3 with software OVS versus 1 with Agilio, for 3X lower TCO.]

To explore further workload examples and data center configurations, and to see the TCO reduction that can be achieved with Agilio SmartNICs, try out our TCO calculator at www.netronome.com/products/ovs/roi-calculator.
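As a rough, back-of-the-envelope counterpart to the calculator for the NFV profile, the sketch below repeats the same arithmetic with NFV-sized flows. The per-core and per-NIC packet rates are assumptions back-calculated from Figure 2 (they differ from the IT case), not measured values.

```python
import math

# Hypothetical NFV use-case estimate, mirroring the IT sketch above.
# Per-core and per-NIC packet rates are back-calculated from Figure 2
# and are illustrative assumptions only.

VNF_PPS = 2e6            # ~3 Gbps / (200 B x 8 b/B) = 1.875 Mpps, rounded to 2 Mpps as in the brief

# Software OVS: 16 host cores at an assumed ~0.5 Mpps per core
sw_deliverable_pps = 16 * 0.5e6
vnfs_sw = int(sw_deliverable_pps / VNF_PPS)            # ~4 VNFs per server

# Agilio offload: 1 host core, ~22 Mpps assumed from the SmartNIC
agilio_deliverable_pps = 22e6
vnfs_agilio = int(agilio_deliverable_pps / VNF_PPS)    # ~11 VNFs per server

# Rack roll-up (20 servers per rack assumed): 80 vs 220 VNFs per rack
servers_per_rack = 20
target_vnfs = 220
racks_sw = target_vnfs / (vnfs_sw * servers_per_rack)          # ~2.75 -> "almost three racks"
racks_agilio = target_vnfs / (vnfs_agilio * servers_per_rack)  # 1 rack

print(vnfs_sw, vnfs_agilio)                             # 4, 11
print(math.ceil(racks_sw), math.ceil(racks_agilio))     # 3, 1
print(f"{racks_sw / racks_agilio:.2f}x")                # ~2.75x -> ~3X lower TCO
```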
Summary

This analysis confirms that Agilio SmartNICs can deliver more data to more applications running on cloud servers such as the Ericsson HDS 8000, which translates directly to improved server efficiency and a dramatic reduction in TCO. It shows that a TCO reduction of 6X can be achieved for typical IT workloads and 3X for NFV workloads. Data has also been collected for points in between these two workloads, for operators that may have different or mixed requirements, as shown in the following chart:

[Chart: TCO Savings Using Ericsson Cloud SDN and Netronome Agilio Platform. Y-axis: Percent (%) Savings, 0 to 90. X-axis: per-application workload at 200 Kpps, 400 Kpps, 700 Kpps, 1200 Kpps and 2000 Kpps.]

Netronome Systems, Inc.
2903 Bunker Hill Lane, Suite 150, Santa Clara, CA 95054
Tel: 408.496.0022 | Fax: 408.586.0002
www.netronome.com

©2017 Netronome. All rights reserved. Netronome, the Netronome logo, and Agilio are trademarks or registered trademarks of Netronome Systems, Inc. All other trademarks mentioned are registered trademarks or trademarks of their respective owners in the United States and other countries. SB-ERICSSON-CLOUDSDN-1/17