Terminal Quality of Service (QoS) Settings


Frame runs your applications on powerful servers in the cloud and streams screen updates down to your browser using an H.264-based video streaming protocol. H.264 is a proven and flexible technology used in everything from HDTV broadcasts, digital cinema, Blu-ray players, and digital video recorders to CCTV and video surveillance systems. It’s the same technology that Netflix, YouTube, Apple, and many others use to stream movies and TV shows from their data centers to your TVs, PCs, and phones, and it is well suited to delivering services across long distances.

H.264 offers considerable flexibility in how it processes images. By default, Frame’s implementation is configured to strike a pragmatic balance between image quality and bandwidth that works well for delivering both rapidly changing video content (e.g., teleconferencing services and computer gaming) and high-resolution graphics applications such as CAD packages. While this default works well for most applications and in most situations, there are some circumstances where manually tuning the protocol’s Quality of Service (QoS) characteristics can improve the user experience. The Frame Terminal has an optional advanced control panel where users can adjust QoS settings individually to optimize their experience. QoS settings can be used either to improve overall display quality or to prioritize one performance characteristic ahead of the others when bandwidth is limited.


QoS Settings Defined

Video Encoding Presets

Frame supports multiple custom presets optimized for specific use cases. Both variable bit rate (VBR) and constant bit rate (CBR) presets are available. Variable bit rate presets are better for displaying a mix of static and dynamic content (e.g., a static view of a 3D model combined with rotating/translating parts), while constant bit rate presets are better for displaying dynamic content (movie-like experiences, video conferencing, video games, etc.). There is also a low bit rate preset that minimizes the bandwidth used – useful for network connections with very limited available bandwidth.

Available Video Presets

  • Auto: Default video preset
  • vbrlow: Variable Bit Rate - Low Quality
  • vbrhigh: Variable Bit Rate - High Quality
  • cbrlow: Constant Bit Rate - Low Quality
  • cbrhigh: Constant Bit Rate - High Quality
  • lowbitrate: Low Bit Rate - Low Quality
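
The guidance above can be restated as a small sketch (a hypothetical helper, not a Frame API; the preset identifiers are the ones listed above, and the selection logic simply mirrors this section's recommendations):

```python
# Hypothetical helper restating this section's guidance; not a Frame API.
def choose_preset(content: str, constrained_network: bool = False) -> str:
    """Map a workload description to one of the preset identifiers above."""
    if constrained_network:
        return "lowbitrate"   # minimize bandwidth on very limited links
    if content == "dynamic":  # movie-like content, conferencing, games
        return "cbrhigh"
    if content == "mixed":    # e.g., static 3D model views plus moving parts
        return "vbrhigh"
    return "auto"             # default video preset

print(choose_preset("mixed"))  # vbrhigh
```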

Max Frame Rate

Frame automatically adjusts the video frame rate in response to application activity and available bandwidth. Under normal circumstances, the default frame rate is 20 frames per second (fps). Limiting the maximum frame rate can reduce bandwidth requirements, but may cause choppiness and can make interactive editing tasks difficult. Administrators can set the maximum frame rate for production sessions from the Dashboard. If enabled by the admin, end users can adjust the frame rate of their session as they see fit.

Supported Ranges

  • GPU-enabled instances: 5 - 60 fps
  • Frame Air instances: 5 - 30 fps
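
A minimal sketch of validating a requested maximum frame rate against the documented ranges (the instance-type keys below are illustrative labels, not Frame API names):

```python
# fps ranges from this section; keys are illustrative labels, not API names.
FPS_LIMITS = {"gpu": (5, 60), "air": (5, 30)}

def valid_max_fps(instance_type: str, fps: int) -> bool:
    """Return True if fps falls within the supported range."""
    low, high = FPS_LIMITS[instance_type]
    return low <= fps <= high

print(valid_max_fps("air", 45))  # False: Frame Air tops out at 30 fps
```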

Max Video Bit Rate

Frame limits the maximum video stream bit rate to 16,000 kbps. Lowering this limit reduces the overall bandwidth available to Frame, reducing both frame rate and image quality.

Supported Range

  • 256 kbps - 16,000 kbps


Setting the maximum video bit rate to a value lower than 6,000 kbps will also affect the maximum frame rate and maximum audio bit rate. For example, if the maximum video bit rate is set to 2,000 kbps, the frame rate will be automatically limited to 14 fps and the audio bit rate to 128 kbps.
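
The documented range and the 6,000 kbps coupling threshold can be sketched as a validation helper (illustrative only; the exact frame-rate and audio limits Frame applies below 6,000 kbps are internal, and only the 2,000 kbps example above is documented):

```python
def check_video_bitrate(kbps: int) -> str:
    """Validate a Max Video Bit Rate value against the documented range."""
    if not 256 <= kbps <= 16_000:
        raise ValueError("Max Video Bit Rate must be 256-16,000 kbps")
    if kbps < 6_000:
        # Below 6,000 kbps, Frame also lowers the effective maximum frame
        # rate and audio bit rate (e.g., 2,000 kbps -> 14 fps / 128 kbps).
        return "video, frame rate, and audio limits all reduced"
    return "only the video bit rate limit applies"

print(check_video_bitrate(2_000))
```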

Max Audio Bit Rate

Limiting the maximum audio bit rate reduces the bandwidth available for audio, independently of any settings in the audio source, which can lower overall bandwidth requirements.

Supported Range

  • 0 - 160 kbps (0 disables audio channel)

Scale Video

Changing the video scale resizes the virtual desktop/app session to reduce overall bandwidth requirements. Scale Video can reduce the session size by as much as 50%. Setting Video Scale to 0.5 (50%) scales down a full-screen session running at 1024×768 to 512×384. At this resolution, the resized display contains one quarter of the pixels of the original and requires approximately one quarter of the original bandwidth to transmit. When the Frame Terminal receives the image, it renders it at the original size but at the lower resolution. You get a similar effect if you watch a YouTube video with video quality set to 360p. While the resulting image is blurred, it may still be acceptable, depending on the content being viewed. Work requiring high image fidelity, such as editing documents and spreadsheets, is unlikely to benefit from this approach, but it may be suitable when participating in a video conference where the highest video quality is not required.

Supported Range

  • 0.5 - 1 (50% - 100% of original session size)
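
The scaling arithmetic above can be checked with a few lines of Python (a worked example, not Frame code):

```python
def scaled_resolution(width: int, height: int, scale: float):
    """Apply a Video Scale factor (0.5-1.0) to a session resolution."""
    if not 0.5 <= scale <= 1.0:
        raise ValueError("Video Scale must be between 0.5 and 1")
    return int(width * scale), int(height * scale)

w, h = scaled_resolution(1024, 768, 0.5)
print(w, h)  # 512 384
# Each dimension halves, so pixel count (and, roughly, bandwidth)
# drops to one quarter of the original.
print((w * h) / (1024 * 768))  # 0.25
```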


Max Video Quantization

The video quantization setting controls the way the Frame protocol encodes the video stream. Quantization can be thought of as a measure of the degree to which a video stream can be compressed for a given image quality. Quantization is determined dynamically by the H.264 encoder based on both the available bandwidth and the display content. How this is done is beyond the scope of this article, but the output is readily understandable: complex content (such as a high-resolution CAD drawing) is assigned a low quantization factor and only lightly compressed to maintain quality, while simpler, ‘low information’ content receives a higher quantization value and is compressed to a greater degree.

The max video quantization control sets the amount of compression that the encoder must use across a range from 48 (heavy compression, lower image quality) to 24 (light compression, better image quality), with a default setting of 42. Increasing the quantization factor from 42 towards 48 will force the encoder to compress all content more aggressively, reducing the amount of bandwidth required but with a risk of increasing visible compression artifacts in complex images. Decreasing the quantization factor towards 24 will permit the encoder to use less compression and so achieve higher image quality, but only if there is bandwidth available to transmit the video stream. Setting a low quantization factor will not improve image quality unless there is sufficient bandwidth available for the encoder to take advantage of it.


Setting quantization to a low value (24-28) may make network latency effects more pronounced.
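
As a sketch, a max quantization value can be validated against the range described above (the verbal labels below are illustrative interpretations of this section, not Frame-defined bands):

```python
def describe_max_quantization(qp: int) -> str:
    """Validate a Max Video Quantization value (24-48, default 42)."""
    if not 24 <= qp <= 48:
        raise ValueError("Max Video Quantization must be 24-48")
    if qp <= 28:
        # Light compression: best quality, but needs bandwidth headroom,
        # and network latency effects may become more pronounced.
        return "light compression"
    if qp > 42:
        return "heavier-than-default compression"  # illustrative label
    return "default-or-lighter compression"

print(describe_max_quantization(42))  # default-or-lighter compression
```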

Best Video Quality

Frame’s H.264 implementation uses YUV 4:2:0 chroma subsampling to encode images. This takes advantage of the human eye’s inability to recognize color differences to the same degree that it can recognize variations in brightness. By sending less information about color than it does about brightness, it is possible to reduce the amount of bandwidth required substantially without significantly compromising image quality. This does an excellent job of reducing the amount of bandwidth required, but in some situations, especially in apps where regions of strongly contrasting colors are displayed next to each other, chroma subsampling can result in colors “bleeding” into each other with undesirable results.

To support our customers who need absolute color fidelity, Frame also provides support for YUV 4:4:4 encoding. This turns off chroma subsampling, sending the full depth of color information for every pixel. As it sends more color information, there is a corresponding increase in required bandwidth. YUV 4:4:4 encoding is enabled by selecting “Best video quality”. The Frame Terminal session sizing icon (Magnifying Glass Icon) changes color to indicate when “Best video quality” is in use as follows:

  • Green: The session is in Best Video Quality mode
  • Orange: Best Video Quality (YUV 4:4:4) mode has been selected but the session is unable to provide it due to bandwidth limitations
  • White: Best Video Quality mode has not been selected (YUV 4:2:0)


The “Best video quality” toggle is only available when using the Chrome browser.


With Greyscale enabled, color information is not transmitted (this is YUV 4:0:0 encoding), enabling a substantial reduction in bandwidth.

As Greyscale does not send any color information and “Best video quality” sends full color information, these two settings are mutually exclusive.
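
Back-of-envelope arithmetic makes the trade-off between the three encodings concrete (the raw sample counts per pixel are standard H.264 chroma-format facts, not Frame-specific measurements; actual encoded bandwidth also depends on content and compression):

```python
# Raw samples carried per pixel for each chroma format in this section.
SAMPLES_PER_PIXEL = {
    "YUV 4:4:4": 3.0,  # full luma + full chroma ("Best video quality")
    "YUV 4:2:0": 1.5,  # full luma + chroma at quarter resolution (default)
    "YUV 4:0:0": 1.0,  # luma only (Greyscale)
}

baseline = SAMPLES_PER_PIXEL["YUV 4:2:0"]
for fmt, spp in SAMPLES_PER_PIXEL.items():
    print(f"{fmt}: {spp / baseline:.2f}x raw samples vs 4:2:0")
```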


Unconstrained Networks

In environments where network bandwidth is effectively unconstrained, it is possible to adjust Frame QoS settings to enhance user experience in situations where improved image quality can be advantageous.

Picture Editing/Proofing

  • Enable “Best Video Quality” to ensure the highest color fidelity.
  • Decrease “Video Quantization” to minimize image artifacts.

Spreadsheets, CAD Packages, and Similar Applications

  • Enable “Best Video Quality” to prevent ‘color bleed’.
  • Decrease “Video Quantization” to minimize image artifacts.

Bandwidth-constrained Networks

  • Prioritize FPS for the best overall experience.
  • Consider reducing “Video Scale” or increasing “Max Quantization” to reduce bandwidth requirements and allow more concurrent sessions.

Video Conferencing and Webinars

Bandwidth requirements for VoIP and audio/video conferencing services vary greatly. Codec choice, audio quality, display resolution and aspect ratio, image complexity, and activity levels all play a significant role in determining bandwidth requirements. Individual settings can usually be adjusted to reduce overall bandwidth requirements. However, not all systems offer sufficient control to achieve acceptable performance in low-bandwidth settings. If necessary, bandwidth requirements can be reduced further by adjusting Frame QoS settings.

  • Consider reducing “Video Scale” as a simple means of reducing the overall bandwidth required without the need to adjust more advanced QoS options.

Additional adjustments to reduce video bandwidth requirements include:

  • Increasing “Video Quantization”, which reduces bandwidth requirements at the cost of increased ‘blockiness’ in the video stream.
  • Reducing “Max Video Bit Rate”, which reduces bandwidth requirements at the expense of overall video quality and frame rate. By reducing “Max Video Bit Rate” it is possible to limit the overall bandwidth consumed by a Frame session and so ensure additional capacity is reserved for other purposes.