Server Sent Events for Unidirectional Data Flows

Nuwan Chamara Pallewela
Feb 24, 2021

Server-sent events (SSE) are a surprisingly unpopular concept among software developers compared with protocols like web-sockets and mechanisms like polling and long polling. Will SSE give you an advantage over the better-known alternatives? Definitely yes! Can it be used everywhere we have client-server communication? Sadly, no. This post highlights the concept and identifies the use-cases where SSE will outshine web-sockets and polling. Finally, a sample application implementation with coding examples is discussed to help you start experimenting with this straight away.

What is SSE?

SSE is a fairly new concept, introduced in the HTML5 standard, that lets web clients subscribe to a data stream and receive updates continuously without polling. As the name suggests, Server-Sent Events sends updates from the server to the client. This one-way data flow means SSE cannot cover every use-case supported by web-sockets, which provide a full-duplex channel between client and server. Let’s consider server-to-client data streaming in general to discover real-world scenarios and solutions.

Social media applications like Facebook, Instagram and Twitter push data to the home screen to populate the latest events from your connections. Stock price updates flow into a broker application that must display the latest market data. Comparing these two scenarios, a slight difference in latency requirements can be identified: latency is critical for stock price updates and needs to be reduced as much as possible. SSE can fulfil these kinds of scenarios efficiently and with low development complexity.

Following are the possible implementation choices for this scenario.

  1. Polling
  2. Long Polling
  3. Web-Sockets
  4. SSE

Polling

Polling is the traditional, well-known request-response relationship between client and server. The client sends a request for some data of interest and the server replies with a response.

This can implement the scenario discussed above with minimal development complexity: send a REST API request to the server and consume the response. But the mechanism has many issues. The client may have to send requests very frequently to satisfy the latency requirement, and it is only a matter of time before the server is overwhelmed by the volume of requests coming from multiple clients. Most of those requests return the same (or an empty) response when nothing has changed since the previous one. Another concern is the latency inherent in polling: the server cannot send an update as it happens, only when the client next asks. Header overhead and connection establishment also matter, given the huge number of requests exchanged between client and server; each request needs an HTTP connection to be established (HTTP multiplexing can help reduce this impact) and carries a full set of headers. So it is highly recommended to avoid polling for such use-cases.
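The wasted requests described above are easy to see in a sketch of the client-side polling loop. The `fetch` callable, interval and handler below are hypothetical placeholders, not any real API:

```python
import time

def poll(fetch, interval_s, handle, max_polls):
    """Repeatedly request the latest data and hand *new* payloads to `handle`.

    `fetch` is any zero-argument callable returning the current payload.
    Most calls return the same payload as before; that redundancy is
    exactly the waste polling incurs.
    """
    last = None
    for _ in range(max_polls):
        payload = fetch()          # one full HTTP request/response per call
        if payload != last:        # most of the time nothing has changed
            handle(payload)
            last = payload
        time.sleep(interval_s)
```

In practice `fetch` would wrap an HTTP GET; here it is injected so the loop itself stays self-contained.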

Long Polling

Long polling introduces a few improvements that avoid some issues of polling. The client-request/server-response behaviour is still present, with a little trick: the server responds only when new data is available compared to the previous response. Otherwise, the server holds the request open and waits until the data changes.

This mechanism avoids redundant API calls between client and server. But it involves queuing requests from clients, which adds complexity to the server implementation and hurts performance when the load is high. If many clients are queued for a particular piece of data, the update cannot be pushed to all of them at exactly the same moment, so some latency concerns remain under high load. In conclusion, long polling improves on polling, but several of polling’s issues are still present.
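The “hold the request until the data changes” behaviour can be sketched with a condition variable. The class and method names here are illustrative, not from any framework:

```python
import threading

class LongPollStore:
    """Holds a single value; readers block until it changes (or time out)."""

    def __init__(self, value=None):
        self._value = value
        self._version = 0
        self._cond = threading.Condition()

    def publish(self, value):
        with self._cond:
            self._value = value
            self._version += 1
            self._cond.notify_all()   # wake every queued long-poll request

    def wait_for_update(self, known_version, timeout=30.0):
        """A request handler calls this with the version it last saw.

        Blocks (the 'held' request) until a newer version is published,
        then returns it so the handler can finally send a response.
        """
        with self._cond:
            self._cond.wait_for(lambda: self._version > known_version,
                                timeout=timeout)
            return self._version, self._value
```

Every blocked `wait_for_update` call is one queued request the server must keep track of, which is the complexity and memory cost mentioned above.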

Web Sockets

Web-socket is a protocol providing a full-duplex communication channel over a single TCP connection.

As the connection between client and server is preserved, no polling is required: the server can send updates as they occur. The issues around latency, bandwidth and queued requests disappear. But a few new concerns emerge because web-sockets do not use plain HTTP, so developers need to handle problems that HTTP otherwise takes care of. Web-sockets are a highly useful protocol in modern system implementations, but you need to check whether the benefits outweigh the development complexity involved.

Server Sent Events

Here comes SSE. The client receives updates automatically after the initial request, over a single HTTP connection.

No polling is required to get the latest data, latency is as low as with web-sockets, and the implementation overhead is minimal. SSE overcomes the main pain points of polling as well as the implementation overhead of web-sockets. So, if a full-duplex channel is not required for a solution, it is better to look into SSE before reaching for web-sockets.

How to Integrate SSE?

Let’s dig into implementing an end-to-end scenario with SSE.

Message Format

A specific message format is defined for SSE, with a plain-text content type. There are three fields, named id, event and data, each on its own line. A blank line (a double newline) terminates the message.

The server implementation must follow this message format; all server responses need to be in this form.
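As an illustration, a small helper that serialises one message into this format might look like the following (the function name is my own, not part of any SSE library):

```python
def format_sse(data, event=None, msg_id=None):
    """Serialise one server-sent event in the text/event-stream wire format.

    Each field goes on its own `name: value` line; a blank line (a double
    newline) terminates the message.
    """
    lines = []
    if msg_id is not None:
        lines.append(f"id: {msg_id}")
    if event is not None:
        lines.append(f"event: {event}")
    # A multi-line payload becomes one `data:` line per line of text.
    for chunk in str(data).split("\n"):
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"
```

For example, `format_sse("hello", event="greeting", msg_id=1)` yields the three-field message described above, terminated by a blank line.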

Client-Side Implementation

The client connects to the server by creating a new EventSource object. The EventSource API is available in most modern browsers, as it is part of the HTML5 specification.

An EventSource is created with the target URL of the HTTP endpoint. Then onmessage (no, it is not a typo: the ‘m’ is lowercase) and addEventListener handlers can be implemented to process events coming from the server. That’s it!
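In the browser, EventSource does the stream parsing for you. As a language-neutral illustration of what that entails, here is a minimal Python sketch of how a text/event-stream body breaks down into events (a simplified subset of the real parsing rules):

```python
def parse_sse_stream(text):
    """Parse a text/event-stream body into a list of event dicts.

    A minimal subset of what the browser's EventSource does internally:
    blank lines delimit events, repeated `data:` lines are joined, and
    events without an explicit name default to "message".
    """
    events = []
    fields = {"event": "message", "data": [], "id": None}
    for line in text.split("\n"):
        if line == "":                        # blank line: dispatch the event
            if fields["data"]:
                events.append({"event": fields["event"],
                               "data": "\n".join(fields["data"]),
                               "id": fields["id"]})
            fields = {"event": "message", "data": [], "id": None}
        elif line.startswith("data:"):
            fields["data"].append(line[5:].lstrip())
        elif line.startswith("event:"):
            fields["event"] = line[6:].strip()
        elif line.startswith("id:"):
            fields["id"] = line[3:].strip()
    return events
```

An unnamed event here corresponds to the browser’s onmessage handler; a named one corresponds to an addEventListener registration for that name.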

Server-Side Implementation

Following are the key points that need to be considered when implementing server-side endpoints for SSE.

  1. Respond with one or more valid messages using the correct message format
  2. The following headers are required to be present
  • Content-Type : ‘text/event-stream’
  • Connection: ‘keep-alive’
  • Cache-Control: ‘no-cache’
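Putting these two points together, a minimal server-side sketch as a WSGI application might look like this. It is illustrative only; note that Connection is a hop-by-hop header that many WSGI servers manage themselves:

```python
def sse_app(environ, start_response):
    """A minimal WSGI sketch of an SSE endpoint.

    Sends the required headers, then yields each message in the
    text/event-stream format. A real endpoint would stream indefinitely
    as updates occur; this sketch emits three events and stops.
    """
    start_response("200 OK", [
        ("Content-Type", "text/event-stream"),
        ("Cache-Control", "no-cache"),
        # Hop-by-hop; some WSGI servers set/strip this themselves.
        ("Connection", "keep-alive"),
    ])
    for i in range(3):
        body = f"id: {i}\ndata: update {i}\n\n"
        yield body.encode("utf-8")   # each yield is flushed to the client
```

Serving this behind any WSGI server (or translating the same idea to your framework of choice) gives the client a stream that EventSource can consume directly.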

Stopping an Event Stream

To stop the event stream, it must be closed from the client side by calling close() on the EventSource; it is not possible to close it from the server side. If the server becomes unreachable and the connection is lost, EventSource will automatically retry the connection.

Limitations of SSE

There are two main limitations. SSE cannot transfer binary data, since messages must follow the specific text/event-stream format. The other concern is that the maximum number of connections per browser, per domain, is six. This is a hard limit in browser implementations over HTTP/1.1 (HTTP/2 allows more concurrent streams). Both of these need to be considered when designing solutions with SSE.

Conclusion

In conclusion, use server-sent events when you need to send real-time, continuous updates to a client and a web-socket would be overkill. Avoid polling if you need real-time updates from the server to the client.


Nuwan Chamara Pallewela

Tech Enthusiast | PhD Researcher in Artificial Intelligence at La Trobe University