Evolution of Switching Architectures in Cloud Networking

As we wrap up this year, I sense that the architectural shift in data center environments has begun to change traditional constructs. As I said at the start of the year in January 2009, the two-tier cloud transformation was one of my New Year's resolutions, and I predicted it would be more achievable than my weight-loss resolution. Well, I suppose I can safely confirm that statement now!

Networks designed in the late 90s addressed static applications, and email was the "killer" application. Today's data patterns and application access are dynamic and demand new switching approaches. The future of cloud networking is predicated on a balanced trade-off among many variables: guaranteed performance, low latency, and application traffic patterns. I see two emerging switching architectures for cloud designs: cut-through switching for low-latency, high-performance computing (HPC) cluster applications, and elegant store-and-forward switching with expanded buffers and queuing for next-generation data center applications.
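The essential difference between the two modes is when a switch may begin transmitting a frame. A minimal sketch of the per-hop arithmetic, using an assumed fixed fabric delay (the 300 ns figure and frame sizes are illustrative, not vendor specifications):

```python
# Per-hop forwarding delay under the two switching modes (illustrative
# model; the 300 ns fabric delay is an assumption, not a datasheet value).

LINK_RATE_BPS = 10e9  # 10GbE

def store_and_forward_delay(frame_bytes: int, fabric_ns: float = 300.0) -> float:
    """The whole frame must be received before forwarding begins."""
    serialization_ns = frame_bytes * 8 / LINK_RATE_BPS * 1e9
    return serialization_ns + fabric_ns

def cut_through_delay(header_bytes: int = 64, fabric_ns: float = 300.0) -> float:
    """Forwarding starts as soon as the header has been parsed."""
    header_ns = header_bytes * 8 / LINK_RATE_BPS * 1e9
    return header_ns + fabric_ns

# A 9000-byte jumbo frame: store-and-forward pays full serialization on
# every hop, while cut-through pays only for the header.
print(store_and_forward_delay(9000))  # ~7500 ns
print(cut_through_delay())            # ~351 ns
```

The gap compounds with every hop, which is why cut-through dominates in latency-sensitive leaf designs while store-and-forward remains attractive where buffering matters more than nanoseconds.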

Low-Latency Compute Clusters:

Modern applications such as market data feeds for high-frequency trading in financial services demand ultra-low, sub-microsecond latency, achieved with cut-through switching techniques. The advantage of this 10GbE architecture is best-in-class latency with minimal per-port buffers, guaranteeing real-time traversal of information across data feeds, feed handlers, clients, algorithmic trading applications, and the network. This is an ideal architecture for "leaf" and low-latency designs with symmetric traffic patterns. It assumes that the network is lightly loaded and therefore not congested.
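That lightly-loaded assumption is not incidental: with minimal buffering, queuing delay is the dominant risk. A toy M/M/1 queueing model (an illustrative assumption, not a measurement of any product) shows how quickly mean delay grows as port utilization climbs:

```python
# M/M/1 queueing sketch: why cut-through's minimal buffering presumes a
# lightly loaded network. Mean sojourn time is 1/(mu - lambda), which
# explodes as utilization approaches 1.

def mean_queue_delay_us(service_us: float, utilization: float) -> float:
    """Mean time in system for an M/M/1 queue, in microseconds."""
    mu = 1.0 / service_us      # service rate, frames per microsecond
    lam = utilization * mu     # arrival rate
    return 1.0 / (mu - lam)

# A 64-byte frame serializes on 10GbE in ~0.0512 us.
svc_us = 64 * 8 / 10e9 * 1e6

print(mean_queue_delay_us(svc_us, 0.10))  # ~0.057 us, near the bare service time
print(mean_queue_delay_us(svc_us, 0.95))  # ~1.02 us, roughly 18x worse
```

At low load the switch delivers essentially the raw cut-through latency; near saturation, queuing alone pushes delay past the microsecond mark, which motivates the buffered architecture described next.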

Next Generation Data Centers:

For heavily loaded networks, such as "spine" applications, seamless access to storage, compute, and application resources, enabling scale-out across networks with uniform performance, is key. Web applications demanding movement of large volumes of storage, map-reduce clusters, and search or database queries are the key targets. Uniformity of performance is a mandatory metric, and slightly higher latencies are acceptable. Increasing buffers from 100-200 kilobytes per port to many megabytes per port, combined with well-designed store-and-forward switching, is the optimal answer to manage and prevent congestion.
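A back-of-envelope incast calculation shows why hundreds of kilobytes per port can fall short. The sketch below assumes N synchronized senders bursting toward a single egress port; the worker count and burst size are hypothetical figures chosen for illustration:

```python
# Rough incast model: N senders each deliver a burst of B bytes to the
# same egress port at the same instant. The burst arrives at N x line
# rate but drains at 1 x line rate, so roughly (N - 1)/N of the offered
# bytes must be absorbed by the egress buffer or be dropped.

def incast_buffer_bytes(senders: int, burst_bytes: int) -> float:
    return senders * burst_bytes * (senders - 1) / senders

# Hypothetical example: 32 map-reduce workers each bursting a 64 KB reply.
need = incast_buffer_bytes(32, 64 * 1024)
print(f"{need / 1e6:.2f} MB")  # ~2 MB for a single synchronized burst
```

Even this modest scenario demands an order of magnitude more buffering than a 100-200 KB port can hold, which is the case for the multi-megabyte-per-port designs described above.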

Introducing the Arista 7048 1/10GE switch

Our introduction of the Arista 7048 is an example of this architecture. With a combination of both 1GbE and 10GbE ports, its extended memory of 768 megabytes guarantees predictable, equal access between all nodes. In such asymmetric cases, cycles cannot be wasted while an application waits for data to arrive, or indirectly while it waits for another application process. The 7048 supports up to four 10GbE uplinks to create active-active, non-blocking designs. The Arista 7048 is also the first 1RU switch to seamlessly integrate L4-L7 features based on Citrix VPX technology.
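How close such a design comes to non-blocking is a one-line oversubscription calculation. In the sketch below, the 48 x 1GbE downlink count is an assumption for illustration, not a figure taken from the 7048 datasheet; the four 10GbE uplinks come from the description above:

```python
# Oversubscription sketch for a 1/10GbE leaf switch.
# Assumption: 48 x 1GbE server-facing ports (illustrative, not a
# datasheet value); 4 x 10GbE uplinks as described in the post.

downlink_gbps = 48 * 1   # aggregate server-facing bandwidth
uplink_gbps = 4 * 10     # aggregate uplink bandwidth
ratio = downlink_gbps / uplink_gbps

print(f"oversubscription ratio: {ratio:.1f}:1")  # 1.2:1 -- close to non-blocking
```

Ratios near 1:1 mean the uplinks can carry nearly the full downlink load, which is what makes active-active designs across multiple spines practical.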

Real World Profile

Take an important and familiar social networking application: Facebook. Their public information shows that they have constructed their cloud with 800+ servers per cluster, generating 50-100 million requests and accessing 28 terabytes of memory. Instead of a traditional database retrieval that would take five milliseconds, Facebook deploys a "memcache" architecture that reduces the access time to half a millisecond. Reduced retransmit delay and increased persistent connections also improve performance. The benefit of large buffers with guaranteed access should be a key consideration.
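The memcache pattern described above is a read-through cache: serve hot keys from memory and fall back to the database only on a miss. A minimal sketch, using the 5 ms and 0.5 ms figures cited in the post as simulated latencies (the key names and class are hypothetical):

```python
# Read-through cache sketch of the memcache pattern: in-memory hits at
# ~0.5 ms, database misses at ~5 ms (figures from the post, simulated
# here with sleeps; everything else is illustrative).

import time

DB_LATENCY_S = 0.005      # traditional database retrieval
CACHE_LATENCY_S = 0.0005  # in-memory access

class ReadThroughCache:
    def __init__(self, backing_store: dict):
        self._db = backing_store
        self._cache: dict = {}

    def get(self, key):
        if key in self._cache:
            time.sleep(CACHE_LATENCY_S)   # simulated cache hit
            return self._cache[key]
        time.sleep(DB_LATENCY_S)          # simulated database miss
        value = self._db[key]
        self._cache[key] = value          # populate for subsequent reads
        return value

store = ReadThroughCache({"profile:42": {"name": "alice"}})
store.get("profile:42")  # first access: ~5 ms (database)
store.get("profile:42")  # second access: ~0.5 ms (cache)
```

At 50-100 million requests, shaving 4.5 ms per hot read is exactly the kind of win that makes uniform, well-buffered network access to the cache tier so important.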

The definition of cloud networking continues to evolve, and it remains an important era in IT. Unlike the static and well-defined enterprise world of past decades, the nature of resources and applications is more dynamic, depending greatly on compute needs, workloads, and user access. Oversubscribed enterprise networks of the past were predicated on well-defined email (1 MB) transfers, whereas new cloud designs in the data center require low latency and consistent performance based on application patterns.

I want to take this opportunity to thank Arista's fans and customers for your steadfast support. We wish you a happy holiday season. As always, I welcome your comments at feedback@arista.com.

