Building a high-performance transport for remote desktops and applications, from the cloud to a variety of client form factors, is a multi-dimensional problem, and we continue to innovate to deliver the best user experience. In this blog, we'll take a deep dive into how we have improved Blast Extreme's transport with network intelligence (BENIT) to make it even better!
Blast Extreme Network Intelligent Transport
While the ubiquitous TCP excels in LAN-like high bandwidth, low latency, low loss networks, it falls short in challenging network conditions with higher packet loss, as it relies primarily on retransmissions to recover losses, as discussed here. Last year, we addressed this gap of TCP and delivered a brand-new transport – Blast Extreme Adaptive Transport (BEAT), a UDP-based reliable transport protocol that excels at delivering interactive user applications under challenging network conditions such as low bandwidth, high latency, and packet loss. However, the adaptive transport (built over UDP) alone cannot solve this multi-dimensional transport problem because:
- UDP traffic may be administratively blocked or throttled.
- Policy-based routing may route/shape UDP and TCP traffic distinctly onto different network paths.
- Some apps (e.g. file copy) care more about raw throughput than interactivity, for which TCP is good enough.
- While BEAT matches the performance of TCP in LAN-like conditions on many platforms, it consumes slightly more CPU than TCP, and ongoing TCP innovations on different OS platforms (newer congestion-control algorithms, for instance) mean that BEAT is always playing catch-up under these LAN-like conditions.
Last year, we let end users choose TCP or BEAT based on the network conditions they were connecting from. While this worked just fine for most users who weren't on the move, we quickly learned that the choice is not straightforward for those who are, especially because network conditions may vary dynamically due to co-existing traffic, media interference, and middleboxes inducing jitter. And at times, it is simply a big inconvenience for the user to remember to select the right transport before using their favorite remote application. Thus, to improve the end-user experience, we decided to build a hybrid transport that intelligently chooses between BEAT and TCP dynamically, based on the underlying network conditions.
Before we get to the details of this hybrid transport, it is useful to describe a related transport problem. A reliable transport (such as TCP or Adaptive Transport) provides reliability to applications only while the transport connection is alive. A network blip or a switch of the network adapter (say, from a cellular network to Wi-Fi) may break the connection and disrupt the higher-level application. For a better user experience, when the session resumes, any file copy (via client drive redirection) should resume from where it left off.
It turns out that these two problems, providing network continuity across temporary network losses and providing a hybrid transport for remoting desktops and applications, are related, and both can be solved by introducing a session layer protocol. With Horizon 7.5.0 and Horizon Clients 4.8.0, we introduced BENIT, which can run on top of any combination of reliable transports, as a solution to both.
In order to provide network continuity across hybrid transports, the session layer protocol needs to provide the following characteristics:
- Reliability: Guarantee the sequential delivery of data across transports and/or temporary network failures (or network adapter switches)
- Transport Switching: Use the right combination of transport based on application and network feedback
- Load sharing and Load balancing: Can optionally drive both transports at the same time to balance the load across transports, or pin certain classes of traffic to a particular transport (not covered in this blog).
Figure 1: BENIT is a session layer protocol (add-on) on top of TCP and BEAT. It plugs into the session protocol run by the Multiplexer.
The need for yet another reliability layer on top of reliable transports may seem surprising, but it becomes evident once we understand that the reliability guarantees of the underlying transports are limited to the data that traverses the transport in question. If data is sprinkled across transports by a higher layer, we lose the reliability guarantee for the combined stream. This can be illustrated by the following scenarios.
- Network loss: In the case of a temporary network loss (or network adapter switch), the underlying transport connections fail. Without a reliability guarantee at a higher layer, the data sender cannot tell whether some of the data sent on a given transport was lost in flight, or whether all of it was received but some acknowledgments never made it back to the sender. Without this knowledge, it is impossible for the sender to decide what data to (re-)send when the underlying transport(s) are rebuilt after the network resumes.
- Racing data: Suppose an application (like Drive Redirection) sends two messages (say, m1 and m2) in order and expects to receive them in the same order, like any traditional application built on a reliable transport. After sending m1 on a given transport, suppose we decide to switch transports based on application/network sensor feedback. Now m2 may be put on a different transport, resulting in a race between m1 and m2 to the receiver. Without a reliability layer, the receiver cannot reliably reconstruct the stream as m1 followed by m2.
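Both scenarios point to the same mechanism: tag every message with a session-level sequence number, keep unacknowledged messages on the sender until the receiver confirms them, and reorder at the receiver. The sketch below is a hypothetical, simplified illustration of such a session layer (class names, framing, and the `send`-callable transport API are our own invention, not the actual BENIT implementation):

```python
import struct

HEADER = struct.Struct("!Q")  # 8-byte session-level message sequence number


class SessionSender:
    """Tags each message with a sequence number and keeps unacknowledged
    messages so they can be replayed after a network blip or switch."""

    def __init__(self, transports):
        self.transports = transports          # name -> send callable
        self.active = next(iter(transports))  # currently active transport
        self.next_seq = 0
        self.unacked = {}                     # seq -> payload, until ACKed

    def send(self, payload: bytes):
        frame = HEADER.pack(self.next_seq) + payload
        self.unacked[self.next_seq] = payload
        self.next_seq += 1
        self.transports[self.active](frame)

    def on_ack(self, seq: int):
        self.unacked.pop(seq, None)           # receiver has it; forget it

    def switch(self, name: str):
        self.active = name                    # later messages use new transport


class SessionReceiver:
    """Reorders frames by sequence number so the application sees one
    in-order stream even when messages race across transports."""

    def __init__(self):
        self.expected = 0
        self.pending = {}                     # out-of-order frames held back

    def on_frame(self, frame: bytes):
        seq, = HEADER.unpack_from(frame)
        self.pending[seq] = frame[HEADER.size:]
        delivered = []
        while self.expected in self.pending:
            delivered.append(self.pending.pop(self.expected))
            self.expected += 1
        return delivered                      # in-order payloads for the app
```

If m1 and m2 race and m2 arrives first, the receiver simply holds m2 until m1 arrives, then delivers both in order; and because the sender keeps unacknowledged payloads, it knows exactly what to replay after a reconnect.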
Now that we have established the need for another reliability layer, the next obvious question is: what is the network overhead of this reliability layer? Our implementation limits its network overhead to less than 0.01%, since it does not need to provide all the guarantees of a traditional reliable transport protocol. In particular, it does not need to provide the following:
- Retransmission on timeouts: Retransmissions are performed only across network losses/network adapter switches. The session layer relies on the underlying transports to retransmit on their individual timeouts.
- Error detection/correction: The underlying transports are expected to perform any data validation and error detection/correction; the session layer does none of this itself.
Visibility into the underlying transports is needed to read the network-related sensors and, together with the feedback loop from the applications' sensors, make an intelligent decision on the right combination of transports to use. This decision can either be policy-based or auto-learned. The simplest form of switching is based on static policies, which could be as simple as: switch to BEAT only if packet loss > 1%, latency > 50 ms, and bandwidth < 200 Mbps. These static policies can be created from empirical results of running representative workloads on BEAT and TCP. Once configured, BENIT periodically senses the network conditions (based on the underlying transports' estimates) and switches transports if needed. It should be noted that the use of such a static policy is in itself a big win for an administrator managing many remote desktop farms across varied network conditions. Without it, the onus is on end users to guess the right transport manually, which is error-prone, especially when network conditions vary dynamically.
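A static policy of this kind is easy to picture in code. The sketch below is purely illustrative (the `NetworkSample` type and thresholds are assumptions drawn from the example numbers above, not BENIT's actual configuration format):

```python
from dataclasses import dataclass


@dataclass
class NetworkSample:
    """Hypothetical network estimates read from the active transport."""
    loss_pct: float        # packet loss estimate, in percent
    rtt_ms: float          # round-trip latency estimate, in milliseconds
    bandwidth_mbps: float  # available bandwidth estimate, in Mbps


def choose_transport(sample: NetworkSample) -> str:
    """Static policy: prefer BEAT only on lossy, slow, high-latency links;
    otherwise stay on TCP. Thresholds mirror the example in the text and
    would in practice be tuned from empirical workload runs."""
    if (sample.loss_pct > 1.0
            and sample.rtt_ms > 50
            and sample.bandwidth_mbps < 200):
        return "BEAT"
    return "TCP"
```

The policy runs periodically against fresh estimates; because all three conditions must hold, a merely high-latency but lossless link (where TCP still performs well) does not trigger a switch.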
It isn’t difficult to imagine how this can be extended to auto-learned policies based on the application’s feedback. The key idea is that, based on different application sensors (FPS, A/V quality, perceived latency, etc.), one can continuously compute a “goodness index” using a scoring function. An appropriate transport can then be chosen, or dynamically learned, based on this goodness index.
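One simple form such a scoring function could take is a weighted combination of normalized sensor readings. The weights, normalization targets, and sensor names below are illustrative assumptions, not BENIT's actual scoring function:

```python
def goodness_index(fps: float, av_quality: float, perceived_latency_ms: float,
                   weights=(0.4, 0.4, 0.2)) -> float:
    """Hypothetical scoring function combining application sensors into a
    single score in [0, 1]; higher is better.

    - fps is normalized against an assumed 30 fps target
    - av_quality is assumed to already be a score in [0, 1]
    - latency is penalized linearly up to an assumed 500 ms ceiling
    """
    fps_score = min(fps / 30.0, 1.0)
    quality_score = av_quality
    latency_score = max(0.0, 1.0 - perceived_latency_ms / 500.0)
    w_fps, w_quality, w_latency = weights
    return (w_fps * fps_score
            + w_quality * quality_score
            + w_latency * latency_score)
```

Computing this index per transport over a sliding window and picking the transport with the better running score is one plausible way to turn application feedback into an auto-learned switching policy.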
Irrespective of whether the switching is policy-based or auto-learned, it should be noted that switching is done independently on the client and on the remote desktop. So, if the uplink and downlink network characteristics are very different, the client could be using BEAT while the remote desktop uses TCP, providing maximum flexibility.
Transport Switching Analysis
Now that we understand how transport-switching decisions are triggered, it is also important to understand how to probe/sense for changing network conditions, and how long it actually takes to switch from one transport to the other (the convergence time).
Active vs. Passive Probing: One approach to transport switching is to actively probe the inactive transport with synthetic traffic, to understand the network conditions and application responsiveness before actually switching to that transport. The tradeoff here is the network bandwidth wasted on this synthetic traffic versus greater determinism before switching. To avoid this wasted bandwidth, instead of performing active probing, BENIT builds a probabilistic trust model over time by evaluating whether each switch turned out to be a good one. For example, after a series of bad switches from BEAT to TCP (and back), Blast Extreme will resist switching to TCP immediately, even when the conditions to switch are met.
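The core of such a trust model can be sketched in a few lines. The class below is a hypothetical illustration (the exponential-moving-average update, the 0.5 threshold, and the API are our own assumptions, not BENIT internals):

```python
class SwitchTrust:
    """Hypothetical trust model: every time a switch to a transport turns
    out badly (conditions degrade again shortly after), trust in switching
    to that transport decays, so the policy's recommendation is resisted."""

    def __init__(self):
        self.trust = {"TCP": 1.0, "BEAT": 1.0}   # start fully trusting both

    def record(self, target: str, was_good: bool):
        # Exponential moving average of past switch outcomes (1 = good).
        outcome = 1.0 if was_good else 0.0
        self.trust[target] = 0.7 * self.trust[target] + 0.3 * outcome

    def should_switch(self, target: str, policy_says_switch: bool) -> bool:
        # Even if the static policy fires, resist switching to a transport
        # whose recent switches were repeatedly judged bad.
        return policy_says_switch and self.trust[target] > 0.5
```

Trust recovers the same way it decays: a few good switches to a transport restore its score, so a transport is never permanently blacklisted.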
Convergence time: To minimize network overhead, the reliability layer addresses messages by message sequence numbers rather than byte sequence numbers. This implies that transport switching can reliably be done only at message boundaries. Blast Extreme supports two different approaches with varying convergence time and goodput:
- Lazy switching: Even if the ith message has only been partially sent, start sending the (i+1)th message on the new transport immediately, while the rest of the ith message is sent on the previous transport. To bound the convergence time, if all of the ith message has not been sent within a convergence time of t/2, the ith message may need to be retransmitted in full on the new transport after time t/2 (which is proportional to the previous transport's latency and throughput and the remaining message size).
- Rollback and switch: If the ith message has only been partially sent, roll back and resend the entire ith message on the new transport before sending the next (i+1)th message. This achieves near-zero convergence time at the cost of increased bytes on the wire.
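The tradeoff between the two strategies can be made concrete with a small sketch. The function below is a hypothetical illustration (the `old_send`/`new_send` callables and the return value are our own simplifications), showing what goes where at the moment of a switch and how many redundant bytes each strategy puts on the wire:

```python
def switch_transport(message: bytes, sent_bytes: int,
                     old_send, new_send, lazy: bool) -> int:
    """Sketch of the two switching strategies at a partial-send boundary.

    `message` is the ith message, of which `sent_bytes` have already been
    sent on the old transport when the switch is decided. Returns the
    number of redundant (retransmitted) bytes put on the wire."""
    if lazy:
        # Lazy switching: drain the remainder of message i on the old
        # transport; messages i+1, i+2, ... flow on the new transport.
        old_send(message[sent_bytes:])
        return 0                    # nothing retransmitted
    else:
        # Rollback and switch: abandon the partial send and resend the
        # whole of message i on the new transport immediately.
        new_send(message)
        return sent_bytes           # bytes already sent are now wasted
```

Lazy switching wastes no bytes but its convergence time depends on how fast the old transport can drain the remainder; rollback converges immediately but pays for the bytes already in flight.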
I encourage you to check out Blast Extreme in Horizon 7.5.0 and Horizon Clients 4.8.0 yourself. The results are impressive, and below I show how well the protocol automatically chooses the best transport, and how it automatically resumes a remote session and any in-flight user operation across network blips and network adapter switches.
Hybrid Transport: We first illustrate how BENIT provides the best of TCP and Adaptive Transport and eliminates the need for the end user to second-guess the right transport to use. Table 1 shows BENIT achieving the best FPS numbers on video playback. Table 2 shows BENIT achieving the best throughput on file copy over drive redirection.
Workload: Video Playback
| Network Profile | BENIT | TCP | Blast Extreme Adaptive Transport |
|---|---|---|---|
| 10 Mbps, 20% Loss, 200 ms RTT | 13.40 fps | 0.65 fps | 13.38 fps |
| 10 Mbps, 10% Loss, 200 ms RTT | 16.52 fps | 2.25 fps | 16.52 fps |
| 100 Mbps, 0% Loss, 200 ms RTT | 27.52 fps | 27.53 fps | 25.68 fps |
| 200 Mbps, 0% Loss, 200 ms RTT | 28.23 fps | 28.25 fps | 27.92 fps |
Table 1: BENIT chooses the right transport amongst TCP & BEAT to achieve maximum FPS.
Workload: File Copy Throughput (in Mbps) over Drive Redirection
| Network Profile | BENIT | TCP | Blast Extreme Adaptive Transport |
|---|---|---|---|
| 10 Mbps, 1% Loss, 50 ms RTT | 8.32 Mbps | 2.38 Mbps | 8.32 Mbps |
| 100 Mbps, 0% Loss, 0 ms RTT | 98.8 Mbps | 98.8 Mbps | 91.28 Mbps |
Table 2: BENIT chooses the right transport amongst TCP and BEAT to achieve maximum throughput.
Network Continuity: Finally, Figure 2 illustrates how Blast Extreme can withstand temporary network losses (lasting up to a configurable time window, 120 seconds by default) without disrupting higher-level applications like a file copy over Drive Redirection.
Figure 2: File transfer over Drive Redirection automatically resumes from where it left off after the network is restored.