No matter how close in time, the future of streaming is in the cloud. Still, we live on earth (at least most of us do), and content is acquired and produced on the ground. Hence, there is a need for a reliable way of transmitting data from the ground to the cloud. Despite the various protocols, some quite recent, designed to transport a live stream from one point to another, cloud ingest is still usually associated with three stereotypes: it's insecure, unreliable, and expensive.
Let’s see why none of these is true in 2019!
Why do I need any protocol at all?
Compressed video streams are severely disrupted by packet loss, because video compression standards rely heavily on temporal prediction: a single lost packet corrupts not just one frame but every frame predicted from it. At a typical primary-distribution bitrate of 10 Mbps, one lost packet in a million leads to a visually unacceptable disruption roughly every 15 minutes…
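A quick back-of-the-envelope calculation confirms the order of magnitude, assuming the usual framing of seven 188-byte MPEG-TS packets per IP datagram:

```python
# How often does a one-in-a-million packet loss hit a 10 Mbps stream?
# Assumption: 1316-byte payloads (7 x 188-byte MPEG-TS packets per
# IP packet), the common framing for transport streams over IP.

BITRATE_BPS = 10_000_000
PAYLOAD_BYTES = 7 * 188          # 1316 bytes per IP packet
LOSS_RATE = 1e-6                 # one lost packet per million

packets_per_second = BITRATE_BPS / (PAYLOAD_BYTES * 8)
seconds_between_losses = 1 / (packets_per_second * LOSS_RATE)
print(f"{packets_per_second:.0f} packets/s, "
      f"one visible glitch every {seconds_between_losses / 60:.1f} minutes")
# → about 950 packets/s, one glitch every ~17 minutes
```

Roughly one disruption every quarter of an hour: unacceptable for any professional feed.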
But when you were born with HTTP (and TCP), it may be difficult to see why a protocol is needed to reliably transport a live stream over IP from one point to another. TCP comes with built-in error correction (by reordering and/or re-transmitting missing packets), and the first thought is that it can be used for that purpose. And indeed, it has been used for a while as a cost-effective (and not-so-dirty) option to ingest streams in the cloud. RTMP and push HLS have been extensively deployed and are still used in production today. That being said, the way TCP handles congestion is not designed to sustain a live stream, and it is very hard to control how the system behaves under bad network conditions (for instance, a TCP connection can stall for tens of seconds).
Cloud ingest is reliable
Because TCP could not be properly controlled, several vendors started implementing some sort of "live-streaming oriented" control protocol on top of UDP. UDP, as opposed to TCP, is stateless and connectionless: it keeps no memory of what happened in the past. Because it doesn't store such information, we say that UDP is "unreliable": unless an additional protocol is built on top of it (such as RTP), there is no way for the receiver to know that a packet was lost. RTP with forward error correction, as defined in SMPTE ST 2022-1/2, does a good job of recovering lost packets. But it comes with an overhead both in bitrate and in latency, and may not correct all losses.
Several proprietary implementations (Zixi, VideoFlow, …) started to emerge in the early 2010s. They basically all share the same characteristics: built on UDP, they add a control protocol, relying on a client-server mechanism, that allows the client (the one receiving the stream) to request re-transmission of packets that were recently lost. Such protocols generally operate as follows:
- The receiver does not send any request to the emitter unless it needs to request missing packets,
- The emitter is able to re-send one or several packets upon receiver request,
- The receiver keeps a buffering window (whose duration must be at least one round-trip time, or several if need be) during which re-emitted packets can be stored.
Such a protocol comes with a latency penalty (the receiver holds a window open while waiting for re-transmitted packets), but unless the link is completely broken, it can recover virtually any kind of loss.
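The retransmission mechanism described above can be sketched in a few lines. This is a toy model, not any particular protocol's wire format; the class and method names are ours:

```python
class ArqReceiver:
    """Toy model of a NACK-based retransmission receiver.

    Packets carry sequence numbers; the receiver buffers them,
    asks the sender to re-send any gaps it detects, and releases
    packets downstream only in order.
    """

    def __init__(self, window):
        self.window = window   # buffer depth; must cover >= 1 round trip
        self.expected = 0      # next sequence number to release
        self.buffer = {}       # seq -> payload, awaiting in-order release

    def on_packet(self, seq, payload, request_resend):
        """Store an incoming packet and NACK any missing ones below it."""
        self.buffer[seq] = payload
        for missing in range(self.expected, seq):
            if missing not in self.buffer:
                request_resend(missing)   # ask the emitter to re-send

    def release(self):
        """Hand the contiguous in-order prefix to the decoder."""
        out = []
        while self.expected in self.buffer:
            out.append(self.buffer.pop(self.expected))
            self.expected += 1
        return out


# Simulate a lost packet: 0 and 2 arrive, 1 is NACKed and re-sent.
rx = ArqReceiver(window=16)
nacks = []
rx.on_packet(0, "p0", nacks.append)
rx.on_packet(2, "p2", nacks.append)   # gap detected -> NACK for seq 1
rx.on_packet(1, "p1", nacks.append)   # emitter re-sends upon request
print(nacks, rx.release())            # → [1] ['p0', 'p1', 'p2']
```

Real protocols add timers, loss-report aggregation, and bounded buffers, but the core idea is exactly this: detect gaps, request re-sends, and delay release by a window long enough for the round trip.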
Cloud ingest is cost-effective
Relying on proprietary, closed implementations has certainly limited the adoption of such techniques: both ends (the emitter and the receiver) need to implement the same protocol and to have a commercial agreement with its provider. Moreover, the price and business model of these proprietary protocols have not always been aligned with customer expectations.
There are at least two alternatives, based on the exact same concepts, that recently emerged:
- SRT (which stands for Secure Reliable Transport) was first shown at IBC 2013, then open-sourced in 2017. It is an implementation of such a protocol, proposed by Haivision. Since then, it has been widely adopted by the industry, and the SRT Alliance claims hundreds of members. The library is well documented, easy to integrate, and comes with a great level of testing. It also offers several connection modes, so that only one public IP address is needed.
- RIST, proposed by the Video Services Forum, is a specification (not an implementation) relying on the exact same concepts. Being a specification, it may offer more guarantees in terms of interoperability and independent testing than SRT, but it does not yet cover the full feature set of the latter: it does not support encryption, and its connection method requires two public IP addresses (one on each end), where SRT needs only one.
Another aspect of cloud ingest is the price of delivering packets into the cloud. To keep it simple: you won't pay a single dollar to deliver data into the cloud. Ingest is free for all the major cloud providers. Of course, you will be charged for the compute resources used to process that data, and for the egress of the processed data, but the ingest itself is (and will likely remain) free. Free ingest is strategic for cloud providers, as it attracts customers to their platforms. It's worth noting that some cloud providers (such as AWS) offer "Zixi as a service" to reliably ingest data into their cloud over the public Internet. This service, of course, is charged for.
Cloud ingest is secure
Having sensitive data in the cloud is said to be insecure, because there is a feeling that data posted to the cloud is publicly accessible. But it's always the same story: the security breach does not come from the cloud; it comes from users choosing weak passwords or unintentionally leaving backdoors open.
There are plenty of ways to secure content sent to the cloud. SRT comes with a fairly well-described encryption scheme, similar in spirit to DVB conditional access, that uses AES-128 as its core cipher. You can also run it over a VPN. At the end of the day, when properly configured, it is probably safer to ingest data into the cloud than to use BISS-protected satellite feeds.
The cloud is (finally!) now
Ingesting a live stream in the cloud is cost-effective, secure, and reliable. The emitter (which we call the "Ground Gateway") that we have built at Quortex relies on SRT and comes with ETSI TR 101 290 analysis, seamless stream redundancy, extensive alarming, and remux capabilities. It has been built to enable a smooth transition to the cloud while meeting broadcast reliability standards. Our SRT receiver is part of the Quortex microservice workflow; you can use it with our gateway or with any third-party SRT server.