Evolution of Switching Architectures in Cloud Networking

by Jayshree Ullal on Dec 3, 2009 7:09:13 AM

As we wrap up this year, I sense that the architectural shift in data center environments has begun to change traditional constructs. As I said in January 2009, two-tier cloud transformation was one of my new-year resolutions, and I predicted it would be more achievable than my weight-loss resolution. Well, I suppose I can safely confirm that statement now!

Networks designed in the late 90s addressed static applications, and email was the "killer" application. Today's data patterns and application access are dynamic and demand new switching approaches. The future of cloud networking is predicated on a balanced trade-off among many variables: guaranteed performance, low latency and application traffic patterns. I see two emerging switching architectures for cloud designs: cut-through switching for low-latency high-performance compute (HPC) cluster applications, and store-and-forward switching with expanded buffers and queuing for next-generation data center applications.

Low Latency Compute Clusters:

Modern applications such as market data feeds for high-frequency trading in financial services demand ultra-low, sub-microsecond latency using cut-through switching techniques. The advantage of this 10GbE architecture is best-in-class latency with minimal buffering at the port level, guaranteeing real-time traversal of information across data feeds, handlers, clients, algorithmic trading applications and the network. This is an ideal architecture for "leaf" and low-latency designs with symmetric traffic patterns. It assumes the network is lightly loaded and therefore not congested.
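To see why cut-through matters at these speeds, a back-of-envelope comparison helps: store-and-forward must receive an entire frame before transmitting it, while cut-through begins forwarding once the header is read. The frame and header sizes below are illustrative assumptions, not measurements of any specific switch.

```python
# Back-of-envelope forwarding-latency comparison at 10GbE.
# Numbers are illustrative assumptions, not vendor measurements.

LINK_GBPS = 10  # 10GbE line rate

def serialization_delay_us(frame_bytes: int, gbps: float = LINK_GBPS) -> float:
    """Time to clock `frame_bytes` onto the wire, in microseconds."""
    return frame_bytes * 8 / (gbps * 1e3)

# Store-and-forward waits for the full frame; cut-through forwards
# after reading the header (assume the first 64 bytes here).
frame_bytes = 1500                           # full-size Ethernet payload
sf_us = serialization_delay_us(frame_bytes)  # ~1.2 us added per hop
ct_us = serialization_delay_us(64)           # ~0.05 us added per hop

print(f"store-and-forward: {sf_us:.2f} us/hop, cut-through: {ct_us:.3f} us/hop")
```

Across a multi-hop path, that per-hop difference compounds, which is why sub-microsecond designs favor cut-through when congestion is unlikely.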

Next Generation Data Centers:

For heavily loaded networks, such as "spine" applications, seamless access to storage, compute and application resources, enabling scale-out across networks with uniform performance, is key. Web applications demanding movement of large amounts of storage, map-reduce clusters, and search or database queries are the key targets. Uniformity of performance is a mandatory metric, and slightly higher latencies are acceptable. Increasing buffers from 100-200 Kbytes/port to many megabytes/port, combined with well-designed store-and-forward switching, is the optimal answer to manage and prevent congestion.
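A rough sketch of why buffer depth matters: when many senders converge on one egress port, the buffer determines how long a burst can be absorbed before drops begin. The buffer sizes below are illustrative figures in the spirit of the ranges quoted above, not any product's specification.

```python
# Illustrative sketch: how long a per-port egress buffer can absorb a
# burst before draining at line rate. Buffer sizes are assumptions
# matching the ranges discussed in the post, not datasheet values.

def drain_time_ms(buffer_bytes: int, egress_gbps: float = 10.0) -> float:
    """Time to drain a full buffer at line rate, in milliseconds."""
    return buffer_bytes * 8 / (egress_gbps * 1e6)

small_buf = 200 * 1024        # ~200 KB/port, typical low-latency design
large_buf = 48 * 1024 * 1024  # tens of MB/port, the deep-buffer approach

print(f"200 KB buffer absorbs ~{drain_time_ms(small_buf):.3f} ms of overload")
print(f"48 MB buffer absorbs ~{drain_time_ms(large_buf):.1f} ms of overload")
```

The deeper buffer rides out bursts hundreds of times longer, trading a little latency for uniform, drop-free performance under load.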

Introducing the Arista 7048 1/10GE switch

Our introduction of the Arista 7048 is an example of this architecture. With a combination of 1GE and 10GE ports, its extended memory of 768 megabytes guarantees predictable access between all nodes. In such asymmetric cases, cycles cannot be wasted while an application waits for data to arrive, or indirectly while it waits for another application process. The 7048 supports up to four 10GE uplinks to create active-active non-blocking designs. The Arista 7048 is also the first 1RU switch to seamlessly integrate L4-L7 features based on Citrix VPX technology.
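The value of four 10GE uplinks can be sanity-checked with simple oversubscription arithmetic. The 48x1GE downlink count below is an assumption for illustration, not taken from a datasheet.

```python
# Illustrative oversubscription check for a leaf switch with 1GE
# downlinks and 10GE uplinks. The port counts are assumed for the
# example, not quoted from any product datasheet.

def oversubscription(down_ports: int, down_gbps: float,
                     up_ports: int, up_gbps: float) -> float:
    """Ratio of aggregate downlink to aggregate uplink bandwidth."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# 48 x 1GE down = 48 Gbps; 4 x 10GE up = 40 Gbps
ratio = oversubscription(48, 1, 4, 10)
print(f"oversubscription {ratio:.1f}:1")  # close to non-blocking
```

At 1.2:1 under these assumptions, the design is nearly non-blocking; fewer uplinks would push the ratio up quickly (two uplinks would give 2.4:1).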

Real World Profile

Take an important and familiar social networking application: Facebook. Their public information shows that they have constructed their cloud with 800+ servers per cluster, generating 50-100 million requests and accessing 28 terabytes of memory. Instead of traditional database retrieval, which would take five milliseconds per access, Facebook deploys a "memcache" architecture that reduces access time to half a millisecond. Reduced retransmit delay and increased persistent connections also improve performance. The benefit of large buffers with guaranteed access should be a key consideration.
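The per-lookup improvement quoted above is worth stating plainly, using only the two figures from the paragraph:

```python
# Per-lookup speedup from the access times cited above:
# ~5 ms traditional database retrieval vs ~0.5 ms memcache access.

db_ms = 5.0      # traditional database retrieval (from the post)
cache_ms = 0.5   # memcache access (from the post)

speedup = db_ms / cache_ms
print(f"{speedup:.0f}x faster per lookup")  # 10x
```

A 10x reduction per lookup, multiplied across tens of millions of requests, is exactly the kind of workload where consistent network performance, rather than raw minimum latency, dominates.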

Cloud networking continues to be both an evolving and important area of IT. Unlike the static and well-defined enterprise world of past decades, the nature of resources and applications is more dynamic, depending greatly on compute needs, workloads and user access. Oversubscribed enterprise networks of the past were predicated on well-defined email (1MB) transfers, whereas new cloud designs in the data center require low latency and consistent performance based on application patterns.

I want to take this opportunity to thank Arista's fans and customers for your steadfast support. We wish you a happy holiday season. As always, I welcome your comments at Cloud Networking Designs.

Opinions expressed here are the personal opinions of the original authors, not of Arista Networks. The content is provided for informational purposes only and is not meant to be an endorsement or representation by Arista Networks or any other party.
