IP Video Can Give Providers the Jitters

Shaking Off the Problems With Measurement Solutions

One of the more interesting aspects of the video industry’s transition to IP-based networks is the collision of the two worlds of video engineering and network engineering.

Video over IP involves removing proven SDI-based broadcast equipment and replacing it with commercial off-the-shelf (COTS) computing and networking platforms from the IT industry. It’s a collision indeed, especially when you consider that SDI has delivered solid reliability for more than 20 years and is well understood by broadcast engineers. No wonder not everyone is happy about having to make such a move.

But the flexibility and economic benefits gained from tapping into the massive IT ecosystem are impossible to ignore. Those benefits, plus the need for more bandwidth to handle Ultra HD/4K streams (not to mention High Dynamic Range (HDR), Wide Color Gamut (WCG) and, in the future, 8K), make the move to higher bandwidth IP infrastructures all but inevitable.

This means that the video and IT sides of the business must do more than just talk about working together — they actually have to work together. It’s easy to say that the burden falls on video engineers to meet new technical and skills challenges, and indeed there will be a major learning curve. But the IT side faces a big learning curve as well.

IP brings with it technical challenges, including jitter, latency, the risk of dropped packets, and an inherent lack of synchronization, along with asymmetry that results in different path delays upstream and downstream. Also, IP is a complex set of bi-directional protocols that requires knowledge of both the source and destination before deployment.

The biggest difference, however, is that in most data center applications lost data can simply be re-sent; this is not the case with high bitrate video. The challenge for the network engineer is in understanding video technology and its impact on IT infrastructure. It is clear that there is a need for diagnostic monitoring and analysis tools that are usable by both video engineers and network engineers.

Problems? What Problems?
A lot of the issues that cause problems in IP networks can be traced back to packet jitter. Excessive packet jitter can lead to buffer overflows and underflows, causing dropped packets and stalled data flows. Other problems are associated with the timing delay and asymmetry of PTP packet flows. In hybrid SDI and IP workflows, it is also necessary to ensure that the relationship between the SDI and IP video is consistent to enable seamless, frame-accurate switching. This can be achieved by measuring the relationship between the Black Burst/Tri-Level Sync and the PTP clock, and making any necessary correction by skewing the SDI syncs with reference to the PTP clock.

In any digital system, jitter is any deviation from, or displacement of, the periodicity of the signal. In IP networks carrying constant bitrate data, jitter is the deviation from the periodicity of the packet arrival interval at a receiver. This can be caused by incorrect queueing or configuration issues, but assuming that the routers and switches are all configured and operating correctly, the most common cause of jitter is network congestion at router/switch interfaces.

A degree of jitter is inherent in any IP network because of its asynchronous nature. The application within a network element, however, may require the data to be received in a non-bursty form, so receiving devices adopt a de-jitter buffer. The application then receives packets from the output of this buffer rather than directly, with packets flowing out of the buffer at a regular rate, smoothing out the variations in the timing of the packets flowing in.

Accurate Measurements Solve Problems
Packets flow out of a receiver’s buffer at a steady rate known as the drain rate of the buffer. Conversely, the rate at which a buffer receives data is known as the fill rate. Selecting the size of the buffer is important. If the buffer is too small and the drain rate exceeds the fill rate, the buffer can underflow, resulting in a stalled packet flow. If the fill rate exceeds the drain rate, then at some point the buffer will overflow, resulting in packet loss. However, if the buffer is too large, the network element introduces excessive latency. Network jitter causes the packets to become non-periodic, and as a result the buffer fill rate is no longer constant. As the jitter becomes greater, so does this aperiodicity. At some point the buffer’s fill and drain rates become so uneven that the buffer either underflows, leading to stalling, or overflows, leading to packet loss.
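To make the fill and drain relationship concrete, here is a minimal Python sketch (not from the white paper; the buffer size, rates and arrival pattern are illustrative assumptions) that steps a de-jitter buffer through time and reports underflow or overflow:

# Illustrative de-jitter buffer model: packets arrive at a varying (jittered)
# fill rate and are drained at a constant rate. All values are hypothetical.
BUFFER_SIZE_BITS = 40_000        # assumed de-jitter buffer capacity
DRAIN_RATE_BPS = 1_000_000       # constant drain rate in bits per second
STEP_S = 0.001                   # simulation step of 1 ms

# Hypothetical fill rates per step: a quiet period followed by a burst.
fill_rates_bps = [800_000] * 20 + [2_500_000] * 20

occupancy_bits = BUFFER_SIZE_BITS / 2    # start half full
for step, fill_bps in enumerate(fill_rates_bps):
    occupancy_bits += (fill_bps - DRAIN_RATE_BPS) * STEP_S
    if occupancy_bits <= 0:
        print(f"step {step}: buffer underflow -> stalled packet flow")
        occupancy_bits = 0
    elif occupancy_bits >= BUFFER_SIZE_BITS:
        print(f"step {step}: buffer overflow -> packet loss")
        occupancy_bits = BUFFER_SIZE_BITS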

In the case of high bitrate video, either buffer underflow or buffer overflow will likely lead to impaired video. It should also be noted that port oversubscription will, of course, also lead to packet loss.

As noted earlier, in networks carrying constant bitrate data, jitter is the deviation from periodicity at a receiver. Given an accurate clock in the receiver, jitter can be measured simply by time-stamping packet arrivals and plotting the inter-arrival intervals versus time.
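As a simple illustration, the Python sketch below derives those inter-arrival intervals from a set of arrival timestamps (the timestamps are hypothetical; real values would come from a capture device with an accurate clock):

# Hypothetical packet arrival timestamps in microseconds from an accurate
# receiver clock (e.g. hardware time-stamping on the capture interface).
arrival_times_us = [0.0, 125.3, 249.7, 380.1, 499.6, 626.0]

# Inter-arrival intervals; their deviation from the nominal packet interval
# is the packet jitter that would be plotted against time.
inter_arrival_us = [b - a for a, b in zip(arrival_times_us, arrival_times_us[1:])]
print(inter_arrival_us)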

This method is useful for identifying variations in jitter over time, but it is also useful to plot the distribution of inter-arrival intervals versus frequency of occurrence as a histogram. If the jitter is so large that packets arrive outside the range of the de-jitter buffer, the out-of-range packets are dropped. Being able to identify such outliers helps determine whether the network’s jitter performance is likely to cause, or is already causing, packet loss.
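A minimal sketch of that histogram view, again with hypothetical intervals and an assumed nominal interval and de-jitter buffer range used to flag the out-of-range outliers:

from collections import Counter

# Hypothetical inter-arrival intervals in microseconds, with an assumed
# nominal interval and de-jitter buffer tolerance for this example.
intervals_us = [124.8, 125.2, 124.9, 125.1, 210.4, 40.3, 125.0]
NOMINAL_US = 125.0
BUFFER_RANGE_US = 50.0

# Distribution of inter-arrival intervals (1 us bins) vs. frequency of occurrence.
histogram = Counter(round(iv) for iv in intervals_us)

# Outliers that would fall outside the de-jitter buffer's range and be dropped.
outliers = [iv for iv in intervals_us if abs(iv - NOMINAL_US) > BUFFER_RANGE_US]

print(dict(sorted(histogram.items())))
print(outliers)  # [210.4, 40.3]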

With constant high bitrate data, the jitter distribution is also important to measure. If it is extremely broad, network congestion, and hence the magnitude of the network jitter, is likely significant enough to cause packet loss. The corollary is that if the jitter distribution is narrow and the system is still experiencing packet loss, network congestion is unlikely to be the cause.

It might be assumed that this distribution measurement could be used to estimate the buffer size needed to de-jitter the traffic flow, but it takes no account of the ordering of the packet inter-arrival interval samples. In essence, a series of packets with long inter-arrival intervals will inevitably be followed by a corresponding burst of packets with short inter-arrival intervals. It is this burst of traffic that can result in buffer overflow conditions and lost packets. This occurs when the fill rate exceeds the drain rate for a period of time that exceeds the remaining buffer size, represented in microseconds.

Burstiness leads to buffer overflow if: fill rate > drain rate for a period of time that exceeds the remaining temporal buffer size.

De-Jitter Secrets
We have already seen that merely measuring the packet inter-arrival times cannot realistically be used to predict the necessary de-jitter buffer size. There is, however, an alternative form of jitter measurement known as Delay Factor (DF) that can be used to establish de-jitter buffer sizes. DF is a temporal measurement, represented in microseconds in the case of high bitrate video, that indicates how much time is required to drain a virtual buffer at a network node. At any given time, the DF represents the temporal buffer size at that network node necessary to de-jitter the traffic flow.

One such form of DF measurement takes advantage of the fact that RTP carries time stamp information. The RTP time stamp is defined by RFC 3550 to reflect the sampling instant of the first octet in the RTP data packet, expressed in units of the payload’s media clock. This measurement is known as Time-Stamped Delay Factor, or TS-DF, as defined by EBU Tech 3337. The method is in the public domain and is well suited to high bitrate media over RTP applications. TS-DF is based on correlating the arrival times of network packets with the time stamp field in the RTP header.

The TS-DF measurement is based on the Relative Transit Time defined in RFC 3550 (RTP: A Transport Protocol for Real-Time Applications). This is defined as the difference between a packet’s RTP timestamp (held in the RTP header) and the receiver’s clock at the time of arrival, measured in the same units. The TS-DF measurement period is 1 second. In this algorithm, the first packet at the start of the measurement period is considered to have no jitter and is used as a reference packet.

For each subsequent packet that arrives within the measurement period, the Relative Transit Time between this packet and the reference packet is calculated. At the end of the measurement period, the maximum and minimum values are extracted. The TS-DF is calculated as: TS-DF = D(Max) – D(Min).
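As an illustration, here is a minimal Python sketch of the calculation just described (the packet list is hypothetical; in practice the arrival times come from the receiver’s clock and the timestamps from the RTP header, converted to the same units):

def ts_df(packets):
    # packets: list of (arrival_time, rtp_timestamp) pairs in the same units
    # (e.g. microseconds), covering one measurement period of nominally 1 second.
    ref_arrival, ref_ts = packets[0]          # first packet is the reference
    relative_transit = [
        (arrival - ts) - (ref_arrival - ref_ts)
        for arrival, ts in packets[1:]
    ]
    return max(relative_transit) - min(relative_transit)   # D(Max) - D(Min)

# Hypothetical window of (arrival time, RTP timestamp) pairs in microseconds.
packets = [(1000, 0), (1130, 125), (1255, 250), (1372, 375), (1510, 500)]
print(ts_df(packets))   # TS-DF for this measurement window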

Unlike the jitter algorithm in RFC 3550, this algorithm does not use a smoothing factor, and therefore gives a very accurate instantaneous result.

Confronting jitter is just one of the new challenges facing both video and network engineers as the IP video world continues to evolve, and this and other network challenges will demand greater collaboration between the two disciplines in the future.

This article is adapted from a Tektronix white paper: Diagnosing and Resolving Faults in an Operational IP Video Network. For more information, please visit http://info.tek.com/rs/584-WPH-840/images/25W_60900_0_IP-Diagnostics_WP_.pdf.

About Author

Mike Waidson is an Application Engineer in the Video Business Division at Tektronix. For more information, please email tekamericas@tektronix.com or visit www.tektronix.com.