Unlocking the potential of in-network computing for telecommunication workloads 

Azure Operator Nexus is the next-generation hybrid cloud platform designed for communications service providers (CSPs). Azure Operator Nexus deploys Network Functions (NFs) across a range of network environments, including the cloud and the edge. These NFs carry out a wide variety of duties, from layer-4 load balancers, firewalls, Network Address Translations (NATs), and 5G user-plane functions (UPF) to deep packet inspection, radio access networking, and analytics. Given the massive volume of traffic and the number of concurrent flows they manage, the performance and scalability of NFs are essential to ensuring uninterrupted network operations.

Until recently, network operators had only two alternatives for deploying these essential NFs: use stand-alone hardware middlebox appliances, or run them on a cluster of commodity CPU servers using network function virtualization (NFV).

Choosing between these options means weighing each one’s performance, memory capacity, cost, and energy efficiency against the specific workloads and operating conditions, such as traffic rate and the number of concurrent flows that NF instances must be able to support.

Our investigation demonstrates that the CPU server-based strategy often outperforms proprietary middleboxes in terms of cost efficiency, scalability, and flexibility. It is a useful approach when traffic volume is relatively low, as it can easily handle loads below hundreds of Gbps. As traffic volume grows, however, the approach begins to struggle, forcing operators to dedicate more and more CPU cores solely to network functions.


In-network computing: A new paradigm

At Microsoft, we have been working on an innovative approach that has piqued the interest of both industry and academia: deploying NFs on programmable switches and network interface cards (NICs). This shift has been made possible by significant advancements in high-performance programmable network devices, as well as the evolution of data plane programming languages such as Programming Protocol-independent Packet Processors (P4) and Network Programming Language (NPL). For example, programmable switching Application-Specific Integrated Circuits (ASICs) offer a degree of data plane programmability while sustaining robust packet processing rates of up to tens of Tbps, or a few billion packets per second. Likewise, programmable NICs, or “smart NICs,” equipped with Network Processing Units (NPUs) or Field Programmable Gate Arrays (FPGAs), present a similar opportunity. Essentially, these advancements turn the data planes of these devices into programmable platforms.
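
To make that programmability concrete, here is a minimal, illustrative sketch in Python (not P4 or any vendor API; all names here are ours) of the match-action model these devices expose: the control plane installs table entries, and the data plane matches packet header fields against them and executes the bound action.

```python
# Illustrative model of a match-action table, the core abstraction that
# languages like P4 expose on programmable switch ASICs and smart NICs.
# All names are hypothetical; real data planes compile to hardware
# pipelines, not Python.

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Packet:
    dst_ip: str
    out_port: Optional[int] = None
    dropped: bool = False

@dataclass
class MatchActionTable:
    """Exact-match table: header value -> action."""
    entries: dict = field(default_factory=dict)

    def add_entry(self, key: str, action: Callable[[Packet], None]) -> None:
        self.entries[key] = action             # installed by the control plane

    def apply(self, pkt: Packet) -> None:
        action = self.entries.get(pkt.dst_ip)  # match on a header field
        if action:
            action(pkt)                        # run the bound action
        else:
            pkt.dropped = True                 # default action: drop

# Control plane installs forwarding entries; data plane applies them per packet.
forwarding = MatchActionTable()
forwarding.add_entry("10.0.0.1", lambda p: setattr(p, "out_port", 1))
forwarding.add_entry("10.0.0.2", lambda p: setattr(p, "out_port", 2))

pkt = Packet(dst_ip="10.0.0.1")
forwarding.apply(pkt)
print(pkt.out_port)  # -> 1
```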

This technical advancement has enabled a new computing paradigm: in-network computing. It allows us to run, directly on network data plane devices, tasks that were previously performed by CPU servers or proprietary hardware. In addition to NFs, this includes components of other distributed systems. With in-network computing, network engineers can implement various NFs on programmable switches or NICs and cost-effectively handle large volumes of traffic (for example, more than 10 Tbps) with a single programmable switch instead of tens of servers. This eliminates the need to dedicate CPU cores solely to network functions.

Current limitations on in-network computing

In-network computing holds exciting promise, but it has yet to materialize in practical cloud and edge deployments. The central challenge has been effectively handling the heavy demands that stateful applications place on a programmable data plane device. The current approach is adequate for running a single application with a small, fixed workload, but it greatly limits the broader potential of in-network computing.

Because of this lack of resource elasticity, a significant gap separates the evolving needs of network operators and application developers from today’s relatively constrained vision of in-network computing. The model comes under strain as more concurrent in-network applications emerge and more traffic needs to be processed. Currently, a single program runs on a single device with limited resources, such as tens of MB of SRAM on a programmable switch. When an application’s workload exceeds the constrained resource capacity of a single device, the application simply cannot run, since raising these limits typically requires an expensive hardware upgrade. This restriction, in turn, makes it difficult to optimize and expand the use of in-network computing.
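
A quick back-of-the-envelope calculation shows how easily state outgrows a switch. The per-entry size and SRAM figure below are assumptions chosen for illustration, not measurements:

```python
# Rough arithmetic for why large NF state does not fit on a switch.
# Assumed sizes are illustrative; real entry layouts vary by NF and ASIC.
entries = 1_000_000          # per-flow table entries (e.g., a NAT or UPF)
bytes_per_entry = 128        # assumed: key + state + overhead per flow
switch_sram_mb = 32          # "tens of MB" of on-chip SRAM, shared with
                             # forwarding tables, counters, and buffers

table_mb = entries * bytes_per_entry / 1e6
print(f"table needs ~{table_mb:.0f} MB vs ~{switch_sram_mb} MB of SRAM")
# -> table needs ~128 MB vs ~32 MB of SRAM: the NF state alone exceeds
#    the device several times over, before the switch's own tables count.
```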

Bringing resource elasticity to in-network computing

In response to the fundamental problem of resource constraints in in-network computing, we have set out to enable resource elasticity. Our main focus is on in-switch applications, those running on programmable switches, because they face the most stringent resource and capability constraints of any programmable data plane device available today. Rather than pursuing hardware-intensive alternatives, such as improving switch ASICs or developing hyper-optimized applications, we are investigating a more practical alternative: an on-rack resource augmentation architecture.

This model envisions a deployment that combines a programmable switch with other data plane components, such as smart NICs and software switches, which can be added step by step to increase a programmable network’s effective capacity as workload needs grow. This approach offers an intriguing and practical way to overcome the limits of in-network computing.
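
A minimal sketch of this tiered model, with hypothetical device names, tables, and capacities, might look like the following: each packet is handled by the fastest on-rack tier that holds the needed state.

```python
# Illustrative sketch of on-rack resource augmentation: a packet is handled
# by the fastest tier that has the needed state, overflowing to slower but
# roomier devices on the same rack. All figures here are hypothetical.

TIERS = [
    {"name": "programmable switch", "table": {"flow-A": "fwd"}, "capacity_gbps": 12800},
    {"name": "smart NIC",           "table": {"flow-B": "nat"}, "capacity_gbps": 400},
    {"name": "software switch",     "table": {"flow-C": "fw"},  "capacity_gbps": 40},
]

def process(flow_id: str) -> str:
    for tier in TIERS:                       # fastest tier first
        action = tier["table"].get(flow_id)
        if action is not None:
            return f"{flow_id}: '{action}' on {tier['name']}"
    return f"{flow_id}: no state installed on the rack; send to controller"

for flow in ("flow-A", "flow-B", "flow-C", "flow-D"):
    print(process(flow))
```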

At the ACM SIGCOMM 2020 conference, we introduced a new system architecture called the Table Extension Architecture (TEA).1 TEA provides elastic memory through an inventive high-performance virtual memory abstraction. It allows top-of-rack (ToR) programmable switches to handle NFs with large state in tables, such as one million per-flow table entries, which can require hundreds of megabytes of memory that switches normally do not have. The key innovation behind TEA is its ability to let switches access, in a scalable and cost-efficient way, unused DRAM on CPU servers housed in the same rack. This is accomplished by skillfully utilizing Remote Direct Memory Access (RDMA) technology, exposing only high-level Application Programming Interfaces (APIs) to application developers while hiding the complexity underneath.
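
Conceptually, the lookup path works like the Python sketch below. This is our simplified reading of the design, not TEA’s actual switch pipeline; the one-sided RDMA read is simulated with a dictionary access, and the table is scaled down for the example.

```python
# Conceptual sketch of TEA's table-lookup path: the switch keeps hot entries
# in local SRAM and, on a miss, fetches the entry from a rack server's DRAM
# over RDMA without involving that server's CPU. Simulated, not real RDMA.

local_sram = {"flow-1": {"next_hop": "10.0.0.7"}}       # small on-switch cache
remote_dram = {f"flow-{i}": {"next_hop": f"10.0.{i % 256}.1"}
               for i in range(100_000)}                  # server-side table
                                                         # (scaled down from 1M)

def rdma_read(key: str):
    """Stand-in for a one-sided RDMA read into server DRAM (no server CPU)."""
    return remote_dram.get(key)

def lookup(key: str):
    entry = local_sram.get(key)          # fast path: on-switch SRAM hit
    if entry is None:
        entry = rdma_read(key)           # slow path: fetch from rack DRAM
        if entry is not None:
            local_sram[key] = entry      # cache for subsequent packets
    return entry

print(lookup("flow-1"))      # SRAM hit
print(lookup("flow-42424"))  # miss -> simulated RDMA fetch, then cached
```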

Our testing with various NFs shows that TEA delivers low and predictable latency together with scalable throughput for table lookups, all without ever touching the servers’ CPUs. This ground-breaking design has attracted significant interest from academia and industry and has been put to use in several scenarios, including network telemetry and 5G user-plane functions.

In April, at the USENIX Symposium on Networked Systems Design and Implementation (NSDI), we debuted ExoPlane.2 ExoPlane is an operating system designed specifically to extend the on-rack switch’s resource capacity to support multiple concurrent applications.

ExoPlane is built around a practical runtime operating model, a state abstraction, and low performance and resource overheads to address the challenge of efficiently managing application state across multiple devices. The operating system has two core components: the planner and the runtime environment. The planner accepts multiple applications, written for a switch with minimal or no modification, and optimally allocates resources to each application based on input from network operators and developers. The ExoPlane runtime environment then executes workloads across the switch and external devices while correctly managing state, balancing load across devices, and handling device failures. Our evaluation shows that ExoPlane provides low latency, scalable throughput, and fast failover while maintaining a small resource footprint and requiring little or no application modification.
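
The following hypothetical Python sketch illustrates the planner’s job in its simplest form: pack application state into scarce switch SRAM first and assign the overflow to an on-rack external device. The real planner solves a richer optimization problem with operator and developer input; this greedy version is only illustrative.

```python
# Hypothetical sketch of the planner's role as described above: place each
# application's state on the switch while SRAM lasts, then spill the rest
# to an on-rack external device. Demands and budget are made-up figures.

apps = [                      # (name, state demand in MB) -- illustrative
    ("firewall", 8),
    ("nat", 24),
    ("load-balancer", 16),
    ("telemetry", 40),
]
switch_sram_mb = 32

def plan(apps, sram_budget):
    placement, free = {}, sram_budget
    for name, demand in sorted(apps, key=lambda a: a[1]):  # smallest first
        if demand <= free:
            placement[name] = "switch"                     # fits in SRAM
            free -= demand
        else:
            placement[name] = "external device"            # e.g., smart NIC
    return placement

for app, device in plan(apps, switch_sram_mb).items():
    print(f"{app:>13} -> {device}")
```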

Looking ahead: The future of in-network computing

As we explore the frontiers of in-network computing, we see a future rife with possibilities, exciting research directions, and new deployments in production environments. Our work on TEA and ExoPlane has shown us what is possible with on-rack resource augmentation and elastic in-network computing, and we believe they can serve as a practical basis for enabling in-network computing for future applications, telecommunication workloads, and emerging data plane hardware. As always, the ever-evolving landscape of networked systems will continue to present new challenges and opportunities. At Microsoft, we actively investigate, invent, and light up such technology advancements through infrastructure enhancements. In-network computing frees up CPU cores, bringing lower cost, greater scale, and enhanced functionality that telecom operators can benefit from through innovative products such as Azure Operator Nexus.
