Distbench – a flexible network benchmarking tool


Network performance is critical for distributed applications, and there are always efforts underway to improve the performance of data center network communication stacks. These efforts take a variety of approaches: some lead to new hardware, others to new software, and they may target the computing nodes or the network itself. Examples include new network APIs that bypass the operating system and/or devices, RPC layers, I/O threading models, congestion control protocols, fabric topologies, and dynamic multipath routing policies.

Determining which of these innovations to adopt requires running distributed system benchmarks that simulate a variety of typical data center workloads and analyzing the results. These critical benchmarks are often developed in an ad hoc and piecemeal manner that does not result in a common reusable infrastructure. In short, there is a need for better open source distributed system benchmarks that enable the reuse of existing skills and infrastructure.


Distbench is a new open source benchmarking tool from Google intended to reduce the effort spent developing, executing and analyzing network benchmarks by both application developers and network technology developers.

Distbench is somewhat unusual in that it can produce a multitude of different network traffic patterns. It does this by synthesizing traffic from a model provided as a high-level description of the target application to emulate. It also supports sending traffic over multiple RPC protocols (currently gRPC, Mercury, and Homa).

The primary design goals for Distbench are ease of use, speed of test execution, generality of simulated application behaviors, extensibility to new network APIs, and richness of analysis. In short, Distbench should, in principle, be able to run in any cluster or cloud environment, reproduce any traffic pattern, use any RPC layer, and quickly produce results in a format that allows investigators to perform any type of offline performance analysis.


As shown in Figure 1, Distbench functions as a distributed system that synthesizes the traffic patterns of other distributed systems. It consists of a single controller node that manages multiple worker nodes, with each worker node typically located on a separate physical or virtual machine. Performance experiments are initiated by sending a traffic configuration message to the controller node via RPC. The controller node coordinates the worker nodes’ activities, assigning roles, introducing workers to their peers, and collecting performance data at the end of the experiment.

The controller node responds to the traffic configuration RPC with a message containing the collected performance results and the configuration that was tested. Once one experiment ends, the next RPC can start another at any time, without restarting or reconfiguring tasks on individual nodes or interacting with any sort of cluster scheduling system. This workflow ensures that running a collection of performance experiments with Distbench is quick and easy, regardless of the underlying cluster management system.

Figure 1: Illustration of the Distbench benchmark workflow.
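The controller/worker workflow can be sketched, in greatly simplified form, as follows. This is a hypothetical illustration, not Distbench code: all class and method names are invented, and the real system exchanges RPCs over the network where this sketch uses direct method calls.

```cpp
// Hypothetical sketch of the Distbench-style experiment workflow: a
// controller receives one traffic configuration, assigns roles to workers,
// runs the experiment, and returns the collected results. Illustrative only.
#include <string>
#include <vector>

struct TrafficConfig { std::string description; };
struct Results { std::vector<std::string> per_worker_logs; };

class Worker {
 public:
  // In the real system this would arrive as an RPC from the controller.
  void AssignRole(const TrafficConfig&, int role_id) { role_id_ = role_id; }
  std::string RunAndCollect() {
    return "worker-" + std::to_string(role_id_) + ":ok";
  }
 private:
  int role_id_ = 0;
};

class Controller {
 public:
  explicit Controller(std::vector<Worker*> workers)
      : workers_(std::move(workers)) {}
  // One RPC runs one experiment end to end; no cluster restart is needed
  // between experiments, which is the point of the workflow above.
  Results RunExperiment(const TrafficConfig& config) {
    for (size_t i = 0; i < workers_.size(); ++i)
      workers_[i]->AssignRole(config, static_cast<int>(i));
    Results r;
    for (Worker* w : workers_) r.per_worker_logs.push_back(w->RunAndCollect());
    return r;
  }
 private:
  std::vector<Worker*> workers_;
};
```

Because the results travel back in the response to the same RPC that configured the experiment, a sweep over many configurations is just a loop of such calls.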


Distbench is designed to synthesize network traffic from an abstract description of the interactions between multiple services, each of which can be replicated and distributed across multiple physical or virtual nodes. The traffic description includes details of the logical services, the physical placement of services, the actions of RPC handlers (for example, initiating dependent RPCs), request and response payload sizes, fan-in/fan-out, and more. The current version of the network traffic model description focuses mainly on modeling network transfers, including proper handling of dependencies between them, but recently added support also allows simulating simple bottlenecks and antagonists that consume CPU and memory bandwidth.
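To make the shape of such a description concrete, here is a sketch in the style of Distbench's protobuf text-format configuration: two logical services, a payload pair, and an RPC that clients fan out to servers. The field names are reconstructed from memory for illustration and may not match the actual schema exactly.

```proto
# Sketch only: 10 client tasks issuing queries to 4 server tasks.
services { name: "client"  count: 10 }
services { name: "server"  count: 4 }

payload_descriptions { name: "request_payload"  size: 196 }
payload_descriptions { name: "response_payload" size: 1024 }

rpc_descriptions {
  name: "client_query"
  client: "client"
  server: "server"
  request_payload_name: "request_payload"
  response_payload_name: "response_payload"
}

actions {
  name: "run_queries"
  rpc_name: "client_query"
  iterations { max_iteration_count: 1000 }
}

# What each client does, and what the server does on receiving the RPC
# (here: nothing beyond replying).
action_lists { name: "client"       action_names: "run_queries" }
action_lists { name: "client_query" }
```

A multi-tier workload would chain further actions onto the server-side action list, which is how dependencies between transfers are expressed.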

The ultimate goal is for application developers to be able to publish Distbench-compliant traffic pattern configurations as an alternative to releasing and maintaining special versions of their binaries and datasets for benchmarking purposes. The community can then benefit from easy-to-run benchmarks with results that are more predictive of real-world application performance.


Using an abstract description of network traffic allows Distbench to synthesize that traffic across several underlying RPC layers. The actual RPC layer used in an experiment is simply specified in its configuration, and a single Distbench binary can produce traffic using any supported RPC layer. Distbench currently supports gRPC (with multiple threading models), Homa, and Mercury in the open source version, with Google’s internal version adding support for Stubby and other experimental protocols.

Distbench supports multiple RPC layers through an abstraction called a protocol driver. Protocol drivers are easy to write, so by adopting Distbench, the developer of a new RPC layer gains access to a library of traffic patterns for a few days or less of coding. The protocol driver abstraction is also designed to support sending traffic through network simulators, meaning that, in the future, Distbench may be able to help experiment with new network fabrics before they’re built.
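As an illustration of why such an abstraction keeps drivers small, here is a minimal C++ sketch of a protocol-driver-style interface with an in-process loopback implementation. The class and method names are hypothetical, not the actual Distbench API; a real driver would bind sockets and marshal messages where this one simply calls the handler.

```cpp
// Hypothetical protocol-driver interface sketch; names are illustrative.
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Callback invoked when a peer's RPC arrives; returns the response payload.
using RpcHandler = std::function<std::string(const std::string& request)>;

class ProtocolDriver {
 public:
  virtual ~ProtocolDriver() = default;
  // Bind a local endpoint and register the server-side handler.
  virtual void Initialize(const std::string& netdev, RpcHandler handler) = 0;
  // Exchange addresses with peers before traffic starts.
  virtual void Connect(const std::vector<std::string>& peer_addresses) = 0;
  // Issue one RPC to a peer; blocking for simplicity in this sketch.
  virtual std::string IssueRpc(int peer, const std::string& request) = 0;
};

// A trivial in-process "loopback" driver: useful for exercising traffic
// logic without a real network stack, and a model for how a simulator
// could be plugged in behind the same interface.
class LoopbackDriver : public ProtocolDriver {
 public:
  void Initialize(const std::string&, RpcHandler handler) override {
    handler_ = std::move(handler);
  }
  void Connect(const std::vector<std::string>&) override {}
  std::string IssueRpc(int, const std::string& request) override {
    return handler_(request);  // Deliver the "RPC" by direct call.
  }

 private:
  RpcHandler handler_;
};
```

Implementing the same three operations over gRPC, Homa, or a simulator is the O(K) part of the effort; every existing traffic pattern then works unchanged.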


The data formats used to store network benchmark results are often overlooked as an area for innovation, but in fact they are critical to the ability to detect and analyze performance anomalies.

The Distbench architecture makes it possible to compare N traffic patterns over K RPC layers with only O(N+K) development effort instead of the traditional O(N*K). New traffic models can be tested against multiple RPC implementations, and new RPC implementations can be tested against a traffic model library with minimal incremental effort. However, one challenge is that it is not feasible to code support for calculating every possible metric with such a broad range of traffic patterns and associated performance metrics. Distbench produces basic summary statistics on latency and bandwidth, which may be sufficient for experiments with simple traffic patterns, but any deeper analysis must be done offline, which in turn requires raw performance data to be stored in a compact lossless format.

Distbench records its raw performance data in a highly structured format designed to be both detail-rich and space-efficient. The design of this format could be its own white paper, but the highlights are that it can log timing data, client and server identity, bytes transferred, and pass/fail results for each individual RPC. It can also track the dependencies between the many RPCs that may arise in a given RPC tree. Distbench collects this data without requiring a dependency on an external RPC tracing service, and also allows RPC traces to be sampled periodically to save space and overhead.
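To illustrate the kind of per-RPC record such a format might hold, here is a hedged C++ sketch. The field set and names are illustrative, not Distbench's actual schema, and the real format uses structural sharing to stay compact rather than flat records like this one.

```cpp
// Illustrative per-RPC log record; not Distbench's actual result schema.
#include <cstdint>
#include <vector>

struct RpcSample {
  uint64_t start_ns;       // client-side send timestamp
  uint64_t latency_ns;     // round-trip latency
  uint32_t client_node;    // issuing worker id
  uint32_t server_node;    // serving worker id
  uint32_t request_bytes;  // request payload size
  uint32_t response_bytes; // response payload size
  int32_t parent_index;    // index of the parent RPC in its tree; -1 if root
  bool success;            // pass/fail result for this RPC
};

// Any offline analysis can be run over the raw samples; as a trivial
// example, the worst-case latency among successful RPCs:
uint64_t MaxLatencyNs(const std::vector<RpcSample>& samples) {
  uint64_t max = 0;
  for (const auto& s : samples) {
    if (s.success && s.latency_ns > max) max = s.latency_ns;
  }
  return max;
}
```

Because the records are lossless at the per-RPC level, tail-latency percentiles, per-node breakdowns, or dependency-tree critical paths can all be computed after the fact without rerunning the experiment.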

Distbench’s result format can be converted to any format an existing analysis tool can accept, and can be adapted quite easily to existing benchmarks. Tools that natively accept the Distbench format have the advantage of being able to reference traffic configuration metadata, which can be used to automatically produce performance dashboards for arbitrary traffic patterns.


Distbench is currently capable of synthesizing a wide variety of traffic patterns built from traditional single-request, single-response RPCs, and can simulate simple CPU/memory resource consumption and antagonism. Work is underway to calibrate Distbench against existing benchmarks to ensure it produces consistent results, to add support for additional RPC protocols, and to add support for additional networking paradigms, including streaming RPC and RDMA. These efforts will add significant functionality to the tool, but even in its current state, Distbench has already shown significant promise within Google for analyzing cloud VM network performance, new RPC layers, threading models, and network offloads.

The Distbench source is available for download at this repository, which is open to contributions.

About the author: Daniel Manjarres is a senior software engineer at Google, where he focuses on data center network performance measurement and analysis tools. Outside of work he studies world history and searches for the perfect bass line.

Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author, and do not represent those of ACM SIGARCH or its parent organization, ACM.
