Event Hubs ingestion performance and throughput

Azure Event Hubs is a data streaming Platform as a Service (PaaS).  It is an ingestion service.

We’ve looked at Event Hubs as the ingestion end of Azure Stream Analytics in two recent articles (here & here).

Here we look at the client-side performance of different techniques and scenarios.  For instance, we'll consider both the AMQP and HTTP protocols.

By measuring the outcome of those scenarios & techniques, we can then make recommendations.


Here are some recommendations in light of the performance and throughput results:

Let’s now look at the tests we did to come up with those recommendations.

How to run the tests yourself

We performed tests that are repeatable.

The code is in GitHub and can be deployed with a click of the following button:

This deploys an Event Hubs namespace along with several Azure Container Instances.  The container instances are the clients running the different scenarios.

The Event Hubs namespace is set to maximum capacity (20 throughput units) to remove constraints on the service side.  This can be expensive in the long run, so we recommend lowering it after the tests have run.

Each container instance has 1 CPU and 1 GB of RAM.

The throughput test stresses the capacity of the Event Hub.  For that reason the containers are configured to run in sequence instead of in parallel.

The container instances all run the same container image.  That image is available on Docker Hub here.  The code for the image executable is also on GitHub, along with its Dockerfile and artefacts.

The code uses environment variables to run different scenarios.  The ScenarioBase class represents a scenario and derived classes implement the different scenarios.
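As an illustration of that pattern, here is a minimal sketch in Python (the repository's code is C#; the `SCENARIO` variable name and the scenario class names here are assumptions for illustration only):

```python
import os

class ScenarioBase:
    """Base class: each scenario implements run()."""
    def run(self):
        raise NotImplementedError

class IsolatedScenario(ScenarioBase):
    def run(self):
        return "isolated"

class BatchScenario(ScenarioBase):
    def run(self):
        return "batch"

# Map an environment variable value to a scenario class
SCENARIOS = {
    "isolated": IsolatedScenario,
    "batch": BatchScenario,
}

def run_from_environment():
    # Read the scenario name from the environment, defaulting to "isolated"
    name = os.environ.get("SCENARIO", "isolated")
    scenario = SCENARIOS[name]()   # instantiate the selected scenario
    return scenario.run()
```

Each container instance then only differs by the environment variables it is started with.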


Let's start with the performance scenarios.

Here we are interested in how long it takes to send one or many messages.  We are not concerned with throughput but with latency.

We looked at three scenarios:

  1. Isolated:  here we create a brand new Event Hub connection, send an event, and close the connection.
  2. Batch (1 by 1):  here we send many events, one by one, over the same connection.
  3. Batch:  here we send a batch of events in one call.

For the two batch scenarios, we used batches of 50 events.

We did that for both the AMQP and HTTP REST APIs.  The AMQP API is implemented by the Event Hubs SDK (also available in Java).  HTTP isn't, so we implemented it ourselves, following the REST API specifications for both single-event and batch sends.  The implementation is in the HttpEventHubClient class.  We used the IEventHubClient interface abstraction to simplify testing both AMQP and REST.
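To give a flavour of what such a REST client involves, here is a sketch in Python (rather than the article's C#) of its two core pieces: generating a Shared Access Signature token and posting a single event.  The namespace, hub, and key names are placeholders; the SAS format follows the Event Hubs / Service Bus SAS specification.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse
import urllib.request

def make_sas_token(resource_uri, key_name, key, ttl_seconds=3600):
    """Build a Shared Access Signature token for an Event Hubs resource."""
    expiry = str(int(time.time() + ttl_seconds))
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # String to sign: URL-encoded resource URI, newline, expiry timestamp
    string_to_sign = encoded_uri + "\n" + expiry
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode("utf-8")
    return ("SharedAccessSignature sr={}&sig={}&se={}&skn={}"
            .format(encoded_uri,
                    urllib.parse.quote_plus(signature),
                    expiry,
                    key_name))

def send_event(namespace, hub, token, payload_json):
    """POST a single event to the Event Hubs REST endpoint."""
    url = "https://{}.servicebus.windows.net/{}/messages".format(namespace, hub)
    request = urllib.request.Request(
        url,
        data=payload_json.encode("utf-8"),
        headers={"Authorization": token,
                 "Content-Type": "application/atom+xml;type=entry;charset=utf-8"},
        method="POST")
    with urllib.request.urlopen(request) as response:
        return response.status   # the service returns 201 Created on success
```

The batch endpoint works the same way, with a different content type and a JSON array body.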

We always send the same event payload.  The payload is small:  < 1 KB in JSON.  In C#:

    var payload = new
    {
        Name = "John Smith",
        Age = 42,
        Address = new
        {
            Street = "Baker",
            StreetNumber = "221B"
        },
        Skills = new[] { "C#", "SQL" },   //  placeholder values; originals elided
        CreatedAt = DateTime.UtcNow.ToString("o")
    };

The corresponding container instances for those scenarios are:

To look at the results, we can simply select an Azure Container Instance group:


Then select the containers of that group.


There is only one container for each group.  We can select the Logs tab, which shows us the console output.

At the top of the logs we see the environment variables that were passed to the container.  We can also see those in the properties tab.


Performance Results

Here is a summary of the results.

| Scenario | AMQP (in seconds) | HTTP / REST (in seconds) |
| --- | --- | --- |
| Isolated (first) | 1.55 | 0.29 |
| Isolated (sustained) | 0.24 | 0.10 |
| Batch (one by one), 50 events | 1.76 (0.035 per event) | 1.20 (0.024 per event) |
| Batch | 0.26 (0.005 per event) | 0.12 (0.002 per event) |

We split the first scenario into two rows.  We can see that the first event a process sends pays an overhead tax.  We didn't dig into the details; it is likely that Just-In-Time (JIT) compilation is happening, and perhaps some internal objects are initialized once.  The second row shows the average time excluding that first event.

A few observations:

We’ll look at the recommendations after we look at the throughput tests.


Let's look at throughput.

Speed and throughput are sometimes confused.  They are related:  for instance, a very slow service won't be able to achieve high throughput.

Throughput measures the number of events we can send to an event hub per second.  This dictates our ability to scale solutions based on Event Hubs.

We looked at four scenarios for throughput:

  1. Isolated:  again, we open a new connection, send one event and close the connection.
  2. Isolated with pooling:  here we use a connection pool so that we do not create a new connection for each event.  We do not use a singleton connection since we are multithreaded.
  3. Safe Batch Buffer:  here we buffer events together to send them in batches.  We do that in a "safe" way, in the sense that the call does not return until the event has actually been sent as part of a batch.
  4. Unsafe Batch Buffer:  similar to Safe Batch Buffer, but here we return immediately and send the events in a batch shortly after.  Under certain circumstances we could lose events without the caller knowing about it, hence the "unsafe" in the scenario's name.

We used 100 threads sending events and sampled for 2 minutes.  For the two buffer batch scenarios, we capped the batch size at 50 events.

We implemented a very basic connection pooling class, EventHubConnectionPool.  We use it for the pooling scenario.

The batch buffer scenarios implement a more sophisticated algorithm.  The main idea is that when we push an event, we wait a short while (0.1 second) before actually sending it.  During that time, other events accumulate and are sent in one batch.  The trade-off is that the longer we wait, the bigger the batches we send, but the higher the latency.  This algorithm is implemented in a safe manner in SafeBufferBatchEventHubClient and in an unsafe manner in UnsafeBufferBatchEventHubClient.
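The safe variant of that idea can be sketched as follows (a Python sketch of the algorithm, not the repository's C# implementation):  the first event to arrive opens a short window, events pushed during the window accumulate in a buffer, and when the window closes the whole buffer goes out as one batch.  In the safe variant, push blocks until the batch containing its event has been sent; the unsafe variant would simply skip that wait.

```python
import threading

class SafeBufferBatch:
    """Accumulate events for `window` seconds, then send them as one batch.
    push() only returns once the batch containing the event has been sent."""
    def __init__(self, send_batch, window=0.1):
        self._send_batch = send_batch    # callable taking a list of events
        self._window = window
        self._lock = threading.Lock()
        self._buffer = []
        self._sent = None                # signalled when the current batch is sent

    def push(self, event):
        with self._lock:
            if not self._buffer:         # first event opens a new window
                self._sent = threading.Event()
                threading.Timer(self._window, self._flush).start()
            self._buffer.append(event)
            sent = self._sent
        sent.wait()                      # safe: block until the batch is sent

    def _flush(self):
        with self._lock:
            batch, self._buffer = self._buffer, []
            sent = self._sent
        self._send_batch(batch)          # one network call for the whole batch
        sent.set()                       # wake every caller waiting on this batch
```

The cap on batch size (50 events in our tests) and error handling are omitted here to keep the sketch short.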

The corresponding container instances for those scenarios are:

Throughput Results

Here is a summary of results.

| Scenario | AMQP (events / second) | HTTP / REST (events / second) |
| --- | --- | --- |
| Isolated | N/A | 67 |
| Isolated with pool | 3311 | 420 |
| Safe Batch Buffer | 809 | 854 |
| Unsafe Batch Buffer | 7044 | 7988 |

The isolated scenario in AMQP actually fails with a connection timeout; we didn't explore why.  It is likely in the same order of magnitude as its HTTP counterpart, which is the worst performer in the table.

The AMQP protocol shines in a streaming scenario, i.e. when we reuse connections and keep sending events, as in the Isolated with pool scenario.  There it outperforms HTTP / REST by an order of magnitude.

Batching is obviously efficient, as the Unsafe Batch Buffer scenario shows.  But it isn't trivial to implement in a "safe" manner with random events:  we need to wait for events to come in, and then they all need to wait for the interaction with Event Hubs.  It therefore makes sense to use the batch API when events naturally come in batches within the application.  Trying to batch them a posteriori yields worse throughput than sending them one by one.

We also got cues during testing that batching was much more efficient in terms of resources.  That makes sense, as there is much less networking overhead.


We looked at the performance and throughput of different usage patterns of Event Hubs, from the perspective of a client pushing events to Event Hubs.

Obviously, achieving a certain throughput level requires capacity on the Event Hubs namespace.  Capacity dictates the number of MB/s we can push to the hub before getting throttled.  This can easily be monitored.
