By Adam Schadle
Multiscreen delivery requires perfecting a number of delivery profiles to accommodate every screen, and that is impossible unless everyone in the organization understands the importance of testing. Some people might take the “don’t fix what isn’t broken” approach and simply ignore services that seem to be working. That attitude can put the organization’s reputation at risk. What happens if one of those services is delivering poor video quality? By the time a viewer calls customer support to report the problem, it’s too late. Too many quality issues can lead to lost viewers, subscribers, and advertisers, and can even get an operator into trouble with compliance regulators.
Whether it’s real-time TV channels delivered through cable, IPTV, or satellite; file-based delivery of VOD content; or streaming content over the Internet, it’s critical to test the quality of the output from a lab-based or production network delivery system, both before and after channels go into service. Most operators put a lot of effort into choosing the right processing equipment and settings for the job, but all of that effort is wasted if they overlook quality testing. In fact, given the increasing number of variables, requirements, and regulations involved in program delivery today, it’s almost impossible for operators to test too much.
The Beauty of Full-Reference Quality Testing
When thinking about video quality analysis, think full-reference testing. Why? Because full-reference testing is the most accurate method for assessing quality changes. The basic idea is to take a short video clip, send it through a system to be tested, and then compare the output of the system to the original. Any differences in any of the video frames caused by the system can be measured using a variety of objective metrics in these side-by-side tests. You get numerical results that very closely approximate what a human viewer would judge the video quality to be when watching the same video sequences on their own devices. In terms of time and expense, full-reference testing produces very accurate and highly repeatable results at a much lower cost than subjective video testing with human viewers.
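As a minimal sketch of the idea, the per-frame comparison can be expressed with an objective metric such as PSNR, one common full-reference measure. The clip data and noise model below are purely illustrative stand-ins for a real source clip and a real delivery system:

```python
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two 8-bit frames (higher = closer)."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # frames are identical
    return 10 * np.log10(255.0 ** 2 / mse)

# Simulate a short clip: reference frames vs. the same frames after a
# lossy system (here modeled as mild Gaussian noise, for illustration).
rng = np.random.default_rng(0)
reference_clip = rng.integers(0, 256, size=(5, 72, 128), dtype=np.uint8)
noise = rng.normal(0, 4, size=reference_clip.shape)
degraded_clip = np.clip(reference_clip + noise, 0, 255).astype(np.uint8)

# Score every frame pair side by side, then summarize the clip.
scores = [psnr(r, d) for r, d in zip(reference_clip, degraded_clip)]
print(f"mean PSNR: {np.mean(scores):.1f} dB")
```

In practice the per-frame scores would be mapped to a perceptual scale correlated with subjective ratings; PSNR is used here only because it is simple to show.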
The accuracy of a full-reference solution cannot be overstated. Such a solution eliminates the need to rely solely on an engineer to make quality decisions. While they still do visual assessments, engineers also have an objective, highly accurate, numbers-based method and scoring system for quality that correlates to the subjective score they would assign during visual evaluation.
Full-Reference Quality Testing Before Channel Deployment
With the proliferation of programming, devices, and screen sizes today, testing the output of various video delivery protocols and equipment is a crucial step in deciding which combination yields the best downstream video quality — and therefore which systems are worthy of investment.
By verifying the quality of a delivery system’s output in the lab before putting a channel into service, IPTV operators not only help ensure the effectiveness of that delivery system today, but they can determine the best combinations of equipment and delivery protocols to implement as the technology evolves.
In the IP domain, unlike in traditional television, the video coming out of the system is often delivered at a lower resolution and frame rate than the source material in order to accommodate equipment other than televisions. As a result, operators first have to reduce the resolution and frame rate of the source material to create an acceptable reference for the test. Only then does it make sense to compare that reference to the encoded or transcoded material that the end user will see.
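For illustration, that preprocessing step can be sketched as follows, using simple 2x2 block averaging as a stand-in for a production scaler; the function name and the random source frame are hypothetical:

```python
import numpy as np

def downscale_2x(frame: np.ndarray) -> np.ndarray:
    """Halve a frame's resolution by averaging 2x2 pixel blocks.

    A production workflow would use a proper resampling filter; this
    stand-in only shows that the reference must be brought down to the
    delivery resolution before any comparison is meaningful.
    """
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)).astype(np.uint8)

# A 1080p-like source frame (single luma plane, random for illustration).
rng = np.random.default_rng(1)
source_frame = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)

# Reduce the source to the delivery resolution first...
reference_frame = downscale_2x(source_frame)  # now 540 x 960

# ...and only then compare it against the transcoded output at that size.
print(reference_frame.shape)
```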
In addition, it is becoming more and more economical to deploy delivery systems that adapt to the conditions of the network and the requirements of end devices — which means that content providers must be prepared to deliver multiple profiles (resolutions, bit rates, frame rates) for every asset, with understood levels of quality for every instance of the delivery chain and end-device type.
To address that requirement, many operators have created multiprofile adaptive streaming services for which they perfect a fixed set of delivery profiles that will satisfy most target devices at any given time. To test those profiles, operators must compare the (most likely) higher-resolution source video against the lower-resolution video that the encoder produces and the viewer ultimately sees.
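Putting the two steps together, testing a fixed profile ladder might look like the loop below, which scales the source down to each profile's resolution and scores the corresponding output. The ladder, helper names, nearest-neighbour scaler, and simulated encoder noise are all illustrative assumptions, not taken from the article:

```python
import numpy as np

def psnr(reference: np.ndarray, output: np.ndarray) -> float:
    """Full-reference score for one frame pair (higher is better)."""
    mse = np.mean((reference.astype(np.float64) - output.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def scale_to(frame: np.ndarray, height: int, width: int) -> np.ndarray:
    """Nearest-neighbour resize; a stand-in for a production scaler."""
    rows = np.arange(height) * frame.shape[0] // height
    cols = np.arange(width) * frame.shape[1] // width
    return frame[np.ix_(rows, cols)]

# A hypothetical adaptive-streaming ladder: (height, width) per profile.
ladder = [(1080, 1920), (720, 1280), (360, 640)]

rng = np.random.default_rng(2)
source = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)

results = {}
for height, width in ladder:
    # Matched-resolution reference for this profile.
    reference = scale_to(source, height, width)
    # Simulate this profile's encoder output as reference plus coding noise.
    output = np.clip(reference + rng.normal(0, 3, reference.shape), 0, 255).astype(np.uint8)
    results[f"{height}p"] = psnr(reference, output)

for profile, score in results.items():
    print(f"{profile}: {score:.1f} dB")
```

Scoring every rung of the ladder this way lets an operator see whether any one profile drags quality down before the service ever reaches a viewer.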