
Server-Sent Events With Fanout

Written by Werner Donné
Software Architect at Lemonade

At Lemonade we build regular stateful business applications with an event-driven architecture. At its core sit Event Sourcing and CQRS. The UI is always a reactive web application that sends commands to aggregates. A command may have an effect on an aggregate instance, and a published event reflects that change. Any other component can listen for the event and do something meaningful with it.

We wanted to extend this one-way, event-driven data flow to the UI, which is how we ended up using Server-Sent Events (SSE). When the UI sends a command to an aggregate instance it doesn’t wait for a response. The service stores the command in a durable way and replies with status code 202 (Accepted). The effect of the command comes back to the UI in the form of an event, and the UI reacts to it the same way it reacts to internal actions: the data store is updated and the affected part of the UI is re-rendered.
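As a sketch of the browser side of this flow (the endpoint paths, the event shape, and the `applyEvent` reducer are illustrative assumptions, not our actual API):

```javascript
// Apply an incoming event to the client-side data store (pure reducer).
function applyEvent(store, event) {
  return {
    ...store,
    [event.aggregateId]: { ...store[event.aggregateId], ...event.data }
  };
}

// Send a command; the only immediate answer is 202 (Accepted).
async function sendCommand(command) {
  const response = await fetch("/api/commands", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(command)
  });
  if (response.status !== 202) throw new Error("command not accepted");
}

// The effect of the command arrives later as a server-sent event.
function subscribe(onChange) {
  const source = new EventSource("/api/sse");
  let store = {};
  source.onmessage = message => {
    store = applyEvent(store, JSON.parse(message.data));
    onChange(store); // re-render the affected part of the UI here
  };
  return source;
}
```

The point is that `sendCommand` never carries the result; the same `applyEvent` path handles effects of your own commands and events produced elsewhere in the system.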

Issues With Server-Sent Events

We run our software in the cloud on Amazon Web Services, which makes using SSE less than straightforward. The UI needs a long-running connection to some endpoint, which our API service provides. But this service isn’t exposed to the Internet: requests come in via CloudFront and are directed to a load balancer, which then contacts the API service. You have no control over the timeout behavior of these intermediaries; a connection may be interrupted at any moment. This problem is not specific to the cloud. Many on-premises setups have a similar structure.

Providing many concurrent long-running connections is a technical challenge in itself. A standard Linux box could probably handle the scale of our applications well, but at very large scale this becomes a specialized topic. The blog by Mihai Rotaru shows that it is not trivial.


Because of those issues we looked for a specialized player in the market, and we settled on Fanout rather quickly. With this cloud service we could create a setup in no time that works in all situations. You only need to provide two endpoints. First, there is the general SSE endpoint in your API service. You don’t provide the SSE connection there; instead, you redirect the browser to the Fanout URL that comes with your account. In that account you specify your second endpoint, which sets up the Fanout channel. Fanout calls it before completing the SSE connection with the browser. Any service in your system can now send an event to the client through the proper Fanout channel.

We use the username to set up the channels. The SSE endpoint requires authentication. In the redirect to Fanout we pass the encrypted username as a URL parameter. Fanout adds this parameter when it calls the SSE set-up endpoint, where we decrypt the username and create a Fanout channel with the same name.

The commands and events carry the JSON Web Token of the original request; its standard “sub” field always contains the username. Our microservices communicate only through Kafka, a distributed publish/subscribe system. They receive commands via specific topics and emit events via other specific ones. Certain Kafka topics have a Fanout connector attached to them, so any microservice involved in the flow can notify the user simply by placing a message on the right Kafka topic.
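The channel name can thus be recovered from the JWT carried by the event. A sketch, assuming the token was already verified upstream so only decoding is needed:

```javascript
// Extract the username (the "sub" claim) from a JWT that was verified
// upstream. Note: this only decodes the payload; it does not check the
// signature, which must have happened before the event entered the system.
function channelFromJwt(token) {
  const payload = JSON.parse(
    Buffer.from(token.split(".")[1], "base64url").toString("utf8")
  );
  return payload.sub;
}
```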

The Complete Flow

The whole thing is set up in the following steps, which are also shown in the sequence diagram below:

  1. The browser connects to the SSE endpoint of the API.
  2. The API redirects the browser.
  3. The browser connects to the Fanout SSE endpoint, which is part of your Fanout account.
  4. Fanout contacts your SSE set-up endpoint, adding the encrypted username from the redirect URL. This endpoint is also part of your Fanout account.
  5. The API creates a channel at Fanout and gives it the name of the user.
  6. Fanout completes the connection from step 3.
  7. The browser sends some command to the API.
  8. The API relays the command to a Kafka command topic.
  9. A microservice listens to that Kafka topic and executes the command.
  10. This may have an effect on the aggregate instance. The microservice publishes the effect as an event on a Kafka reply topic.
  11. The Fanout Kafka connector sends the event to the proper channel, using the username in the event.
  12. Fanout sends the event to the browser over the SSE connection.
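Step 11 boils down to turning the application event into an SSE frame and posting it to the channel. The frame format below is standard SSE; the publish call is only a hedged sketch of Fanout’s HTTP publish API, whose exact URL and body shape depend on your realm and account:

```javascript
// Turn an application event into a standard SSE frame.
function sseFrame(event) {
  return "event: " + event.type + "\ndata: " + JSON.stringify(event.data) + "\n\n";
}

// Hedged sketch of publishing to a channel over HTTP; URL, auth, and
// body shape are placeholders, not a verified description of the API.
async function publishToChannel(channel, event) {
  await fetch("https://api.fanout.io/realm/YOUR-REALM/publish/" + channel + "/", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR-KEY"
    },
    body: JSON.stringify({ items: [{ "http-stream": { content: sseFrame(event) } }] })
  });
}
```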

SSE sequence diagram

Doing it With AWS Lambda

You don’t have to integrate the handling of SSE connections in your API service; you can also use AWS Lambda. In that case you need three Lambdas. The first handles the initial SSE connection and performs the redirect to Fanout. The second is the set-up endpoint, which creates the Fanout channel. The third listens for messages and sends them to the proper Fanout channel.

The AWS Simple Notification Service (SNS) makes this very easy: the third Lambda subscribes to an SNS topic, and the microservices place their events on it.
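A sketch of that third Lambda (the SNS record shape is AWS’s standard one; the `publish` function and the event’s `username` field are assumptions for illustration):

```javascript
// Third Lambda: triggered by SNS, forwards each event to the user's channel.
// `publish(channel, event)` is injected to keep the handler testable;
// in production it would call Fanout's publish API.
function handler(snsEvent, publish) {
  for (const record of snsEvent.Records) {
    const event = JSON.parse(record.Sns.Message);
    publish(event.username, event); // channel name = username
  }
}
```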


Conclusion

Server-Sent Events enable an event-driven architecture all the way to the client. The Fanout service makes this a lot easier and more robust: it handles the durable-connection business for you, and since it sits outside your own network it is suitable in many situations.

You can decouple your microservices completely from the front end. The solution centralizes the SSE service, so all your applications can use the same SSE endpoint.

