Evolution of Switching Architectures in Cloud Networking

by Jayshree Ullal on Dec 3, 2009 7:09:13 AM

As we wrap up this year, I sense that the architectural shift in data center environments has begun to change traditional constructs. As I said at the beginning of the year, in January 2009, two-tier cloud transformation was one of my new-year resolutions, and I predicted it would be more achievable than my weight-loss resolution. Well, I suppose I can safely confirm that statement now!

Networks designed in the late 90s addressed static applications, and email was the "killer" application. Today's data patterns and application access are dynamic and demand new switching approaches. The future of cloud networking is predicated on a balanced trade-off of many variables: guaranteed performance, low latency, and application traffic patterns. I can see two emerging switching architectures for cloud designs: cut-through switching for low-latency high-performance computing (HPC) applications, and elegant store-and-forward switching with expanded buffers and queuing for next-generation data center applications.

Low-Latency Compute Clusters:

Modern applications, such as market data feeds for high-frequency trading in financial services, demand ultra-low, sub-microsecond latency achieved with cut-through switching techniques. The advantage of this 10GbE architecture is best-case latency with minimal buffering at the port level, guaranteeing real-time traversal of information across data feeds, handlers, clients, algorithmic trading applications, and the network. This is an ideal architecture for "leaf" and low-latency designs with symmetric traffic patterns. It assumes the network is lightly loaded and therefore not congested.
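The latency difference between the two switching modes comes down to serialization: a cut-through switch forwards as soon as the header is parsed, while a store-and-forward switch must receive the entire frame first. A back-of-the-envelope sketch (the frame sizes and per-hop overhead here are illustrative assumptions, not Arista measurements):

```python
# Per-hop latency sketch: cut-through vs store-and-forward on 10GbE.
# Figures are illustrative assumptions, not vendor measurements.

LINK_RATE_BPS = 10e9          # 10GbE line rate
HEADER_BYTES = 64             # bytes a cut-through switch reads before forwarding
FRAME_BYTES = 1500            # full-size Ethernet frame
SWITCH_OVERHEAD_S = 300e-9    # assumed fixed lookup/fabric delay per hop

def serialization_delay(nbytes, rate_bps=LINK_RATE_BPS):
    """Time to clock nbytes onto the wire at the given rate."""
    return nbytes * 8 / rate_bps

# Cut-through forwards once the header has arrived...
cut_through = serialization_delay(HEADER_BYTES) + SWITCH_OVERHEAD_S
# ...while store-and-forward waits for the whole frame.
store_forward = serialization_delay(FRAME_BYTES) + SWITCH_OVERHEAD_S

print(f"cut-through:       {cut_through * 1e6:.2f} us per hop")
print(f"store-and-forward: {store_forward * 1e6:.2f} us per hop")
```

Under these assumptions the cut-through hop stays well under a microsecond, while the store-and-forward hop pays the full 1.2 us it takes to serialize a 1500-byte frame at 10 Gb/s before any forwarding can begin.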

Next Generation Data Centers:

For heavily loaded networks, such as "spine" applications, seamless access to storage, compute, and application resources, enabling scale-out across networks with uniform performance, is key. Web applications that move large amounts of storage traffic, map-reduce clusters, and search or database queries are the key targets. Uniformity of performance is a mandatory metric, and slightly higher latencies are acceptable. Increasing buffers from 100-200 kilobytes per port to many megabytes per port, combined with well-designed store-and-forward switching, is the optimal answer to manage and prevent congestion.
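Why shallow buffers fail in this regime can be seen in a many-to-one (incast) scenario: hundreds of servers answer a scatter-gather query at once, and their responses converge on the single port facing the requester. A minimal sketch, with illustrative server counts and burst sizes (assumptions, not measured workloads):

```python
# Incast sketch: why "spine" switches benefit from deep per-port buffers.
# Server count, burst size, and buffer depths are illustrative assumptions.

SENDERS = 100                     # servers answering one scatter-gather query
BURST_BYTES = 64 * 1024           # assumed response burst per server
SHALLOW_BUFFER = 200 * 1024       # ~200 KB/port (shallow-buffer switch)
DEEP_BUFFER = 48 * 1024 * 1024    # many MB/port (deep-buffer switch)

# The responses arrive nearly simultaneously on one egress port,
# so that port must absorb close to the whole aggregate burst.
aggregate_burst = SENDERS * BURST_BYTES

print(f"aggregate burst: {aggregate_burst / 2**20:.1f} MiB")
print(f"fits in shallow buffer? {aggregate_burst <= SHALLOW_BUFFER}")
print(f"fits in deep buffer?    {aggregate_burst <= DEEP_BUFFER}")
```

With these numbers the burst overwhelms a 200 KB port buffer more than thirty times over, forcing drops and TCP retransmits, while a multi-megabyte buffer absorbs it and preserves uniform performance.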

Introducing the Arista 7048 1/10GE switch

Our introduction of the Arista 7048 is an example of this architecture. With a combination of both 1GbE and 10GbE ports, its extended memory of 768 megabytes guarantees predictable, equal access between all nodes. In such asymmetric cases, cycles cannot be wasted while an application waits for data to show up, or indirectly while it waits on another application process. The 7048 supports up to four 10GbE uplinks to create active-active non-blocking designs. The Arista 7048 is also the first 1RU switch to seamlessly integrate L4-L7 features based on Citrix VPX technology.

Real World Profile

Take an important and familiar social networking application: Facebook. Its public information shows that it has constructed its cloud with 800+ servers per cluster, generating 50-100 million requests against 28 terabytes of memory. Instead of a traditional database retrieval that would take five milliseconds per access, Facebook deploys a "memcache" architecture that reduces the access time to half a millisecond. Reduced retransmit delay and increased persistent connections also improve performance. The benefit of large buffers with guaranteed access should be a key consideration.
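The payoff of caching in front of the database follows directly from the two access times quoted above (5 ms for a database read, 0.5 ms for a cache read). A small sketch of average lookup latency; the hit rates shown are illustrative assumptions, not Facebook's published figures:

```python
# Average lookup latency with a memcache tier in front of a database.
# Access times are from the text (5 ms DB, 0.5 ms cache); hit rates are
# illustrative assumptions.

DB_MS = 5.0      # database read
CACHE_MS = 0.5   # cache read

def mean_latency_ms(hit_rate):
    """Hits are served from cache; misses check the cache, then fall through to the DB."""
    return hit_rate * CACHE_MS + (1 - hit_rate) * (CACHE_MS + DB_MS)

for hit_rate in (0.0, 0.90, 0.99):
    print(f"hit rate {hit_rate:.0%}: {mean_latency_ms(hit_rate):.2f} ms average")
```

Even a modest hit rate collapses the average toward the cache's half-millisecond figure, which is why sustaining that rate across hundreds of servers, without the network adding drops or retransmit delays, matters so much.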

The definition of cloud networking continues to evolve, and it remains an important era in IT. Unlike the static and well-defined enterprise world of past decades, resources and applications are now more dynamic, depending greatly on compute needs, workloads, and user access. Oversubscribed enterprise networks of the past were predicated on well-defined email (1 MB) transfers, whereas new cloud designs in the data center require low latency and consistent performance based on application patterns.

I want to take this opportunity to thank Arista's fans and customers for your steadfast support. We wish you a happy holiday season. As always, I welcome your comments at feedback@arista.com.


Opinions expressed here are the personal opinions of the original authors, not of Arista Networks. The content is provided for informational purposes only and is not meant to be an endorsement or representation by Arista Networks or any other party.
Written by Jayshree Ullal
Jayshree Ullal is a networking executive veteran with 30+ years of experience. In 2018 Barron's named her one of the "World's Best CEOs." In 2015 she was co-awarded "EY 2015 Entrepreneur of the Year" across National USA and "#3 IT Industry Disrupter" by CRN. In 2005, she was named one of the "50 Most Powerful People" by Network World and one of the "Top Executives" by Forbes magazine 2012. As President and CEO for a decade, Jayshree led Arista Networks to a successful IPO in June 2014 at NYSE. She is responsible for building a multibillion dollar business in cloud networking and has forged strategic alliances with Microsoft, HP and VMware to name a few.
