Cloud Networking has arrived. Take server density on a scale never seen before, add compute-demanding application workloads, and you get a class of non-blocking cloud networking that is more aware of dynamic application flows than vanilla packet switching.
A key design criterion here is latency. Of course, there are differing views on latency. Some say it does not matter, since you always run into the low-speed WAN/Internet or the speed-of-light limit. Others say that, with build-outs of private or public cloud islands as a complement to the Internet, new application-intensive workloads traversing cloud environments demand closer attention to latency. Both are right, actually. It is really a case of different strokes for different folks in different networking segments.
Let’s first review the culprits behind latency. End-to-end application-level latency typically comes from three contributors:
Server Latency: The delay introduced by server-side host adapters, the operating system, protocol stacks, and user space in transferring data from application buffers to the cloud network
Network Latency: The delay due to processing L2 or L3 packet headers and switching the network payload through the network mesh from source to destination.
Resource Latency: The delay introduced by a resource, typically storage or a database, in transferring data to and from the cloud network
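To make the decomposition concrete, here is a minimal back-of-the-envelope sketch in Python that sums the three contributors into an end-to-end figure. All of the numbers are illustrative assumptions, not measurements from any particular system.

```python
# Back-of-the-envelope end-to-end latency budget (illustrative numbers only).
# All values are in microseconds and are assumptions, not measured figures.

def end_to_end_latency_us(server_us, network_us, resource_us):
    """End-to-end application latency as the sum of the three contributors."""
    return server_us + network_us + resource_us

budget = {
    "server (NIC + OS stack + user space)": 20.0,     # assumed host-side overhead
    "network (per-hop switching x hops)":   3 * 2.0,   # assumed 3 hops at ~2 us each
    "resource (storage/database access)":   50.0,      # assumed back-end delay
}

total = end_to_end_latency_us(*budget.values())
for name, value in budget.items():
    print(f"{name:40s} {value:6.1f} us")
print(f"{'end-to-end':40s} {total:6.1f} us")
```

Even in this toy budget, the network is only one contributor; the point of a low-latency fabric is to keep its share small and, above all, predictable.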
I recall the raging arguments in the prior decade about the implementation of Ethernet switches and the discussions of "cut-through" versus "store-and-forward". The issues centered on deploying older store-and-forward techniques to preserve packet integrity versus the latency improvements of cut-through. Latency was less of an issue back then, since application flows were kilobyte class (email and file transfers) and network speeds were only 100 megabits transitioning to 1 gigabit Ethernet. Typical network latencies tolerated were 100 to 1000 microseconds, and most classic enterprise networks are still built that way, with many hops across three tiers of oversubscribed networks.
Clearly, with Cloud Networking, one must anticipate wild matrix flows with overlapping peaks and valleys and move these flows uniformly, without drops. To deliver that uniformity, Cloud Networking must be non-blocking, with low and predictable latency for all traffic types. Typical cloud networking latency demands are 1-10 microseconds, with some clusters even demanding nanoseconds.
The new generation of 1Gb/10Gb Ethernet switches from Arista is designed with these key goals in mind, deploying the best cut-through techniques intelligently: the cut-through architecture delays the packet transfer only long enough to look up the Layer 2/3 headers and determine the destination port. Additionally, a key parameter is buffer capacity, which must be sized to ensure predictable latencies.
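As a rough illustration of why cut-through matters as speeds climb, the sketch below compares the per-hop forwarding delay of the two approaches. The frame size, header bytes examined, and lookup time are assumptions for illustration, not specifications of any Arista product.

```python
# Rough per-hop forwarding delay: store-and-forward vs. cut-through.
# Frame size, header bytes, and lookup time below are illustrative assumptions.

def serialization_us(num_bytes, link_gbps):
    """Time to clock num_bytes onto a link of the given speed, in microseconds."""
    return (num_bytes * 8) / (link_gbps * 1e3)  # bits / (bits per microsecond)

FRAME_BYTES = 1500      # assumed full-size Ethernet frame
HEADER_BYTES = 64       # assumed bytes examined before a forwarding decision
LOOKUP_US = 0.3         # assumed L2/L3 lookup time

for gbps in (1, 10):
    store_and_forward = serialization_us(FRAME_BYTES, gbps) + LOOKUP_US
    cut_through = serialization_us(HEADER_BYTES, gbps) + LOOKUP_US
    print(f"{gbps:>2} GbE: store-and-forward ~{store_and_forward:5.2f} us, "
          f"cut-through ~{cut_through:5.2f} us per hop")
```

Under these assumptions, store-and-forward must wait for the whole frame before forwarding, while cut-through begins forwarding after the headers, and the gap compounds across every hop in the fabric.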
Perhaps one of the most compelling and proven cases for low latency is algorithmic trading. The market data industry adjusts and inputs data near-instantaneously, with firms vying with one another for a competitive edge on order transactions measured in fractions of microseconds! The Financial Information Exchange (FIX) protocol is widely used for real-time information transactions and relies on time-synchronization protocols (such as the Precision Time Protocol or Network Time Protocol, PTP/NTP) for latency measurement. Measuring and sampling latency delays accurately, using Arista’s Extensible Operating System (EOS) and third-party tools for optimal synchronization in microseconds, is also becoming a key factor.
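As a simplified illustration of timestamp-based latency sampling (this is not EOS or FIX code; the function names and values are hypothetical), a sender and receiver whose clocks are already synchronized via PTP or NTP could compute one-way delay from paired timestamps like this:

```python
# Simplified one-way latency sampling between two hosts whose clocks are
# assumed to be synchronized (e.g., via PTP or NTP). Illustrative sketch only;
# names and values are hypothetical.
import time
import statistics

def send_timestamped_message(payload: bytes) -> tuple[int, bytes]:
    """Attach a nanosecond send timestamp to the payload."""
    return time.time_ns(), payload

def one_way_latency_us(send_ts_ns: int, receive_ts_ns: int) -> float:
    """One-way delay in microseconds, valid only if both clocks are in sync."""
    return (receive_ts_ns - send_ts_ns) / 1_000.0

# Simulate a handful of samples on a single host (so both timestamps share a clock).
samples = []
for _ in range(5):
    sent_ns, _msg = send_timestamped_message(b"order")
    received_ns = time.time_ns()          # would be taken on the receiving host
    samples.append(one_way_latency_us(sent_ns, received_ns))

print(f"min {min(samples):.2f} us, median {statistics.median(samples):.2f} us, "
      f"max {max(samples):.2f} us")
```

The accuracy of such samples is bounded by the quality of clock synchronization, which is exactly why PTP/NTP discipline matters when the delays being measured are themselves in the microsecond range.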
In addition to financial services, there are a growing number of Cloud Networking use cases that need low-latency architectures, such as high-performance computing, web and database clusters, storage access, seismic analysis, large-scale data analytics, and virtualization. At Arista Networks we have pioneered the world's best combination of 1Gb/10Gb Ethernet port density, footprint, and latency with our 7xxx series of switches.
Welcome to the new ways of Cloud Networking! For those of you who will be at Interop, do visit us at Arista Networks Booth #1651.
As always I invite your views at feedback@arista.com.