Monday, February 23, 2009

HPA Retreat Notes

There was an interesting session at the HPA (Hollywood Post Alliance) retreat last week where several studios, as well as a representative of the EBU (European Broadcasting Union), talked about their real-world experiences with video encoding and distribution.
Here are my notes.

The EBU has published their recommendations for HDTV video compression for acquisition, production and distribution in this document: http://tech.ebu.ch/docs/r/r124.pdf
The full report is only available to EBU members, but we can take note that:
  1. Progressive pictures look better than interlaced.
  2. Acquisition is recommended at 4:2:2, where 8-bit is sufficient.
  3. To maintain quality after 7 cycles of encoding, long-GOP, CBR MPEG-2 at 50 Mbit/s or higher is recommended (post and production).
  4. Distribution using H.264/AVC with CBR encoding requires 50% less bit rate than MPEG-2.
  5. Interlaced video requires 20% more bit rate than progressive video for the same quality.
  6. The dominant image quality impairments are determined by the distribution codec.
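Taken together, rules 4 and 5 make for a quick back-of-the-envelope calculator. Here is a minimal Python sketch, using the percentages quoted above as rough rules of thumb rather than exact engineering values:

```python
def estimate_bitrate(mpeg2_progressive_mbits, codec="mpeg2", interlaced=False):
    """Rough bit-rate estimate from the EBU rules of thumb above.

    Starts from a known progressive MPEG-2 bit rate, then applies:
      - rule 4: H.264/AVC needs roughly 50% of the MPEG-2 bit rate,
      - rule 5: interlaced video needs roughly 20% more than progressive.
    """
    rate = mpeg2_progressive_mbits
    if codec == "h264":
        rate *= 0.5
    if interlaced:
        rate *= 1.2
    return rate

# Starting from the 50 Mbit/s MPEG-2 production rate:
print(estimate_bitrate(50, codec="h264"))                   # 25.0
print(estimate_bitrate(50, codec="h264", interlaced=True))  # 30.0
```

Starting from the 50 Mbit/s MPEG-2 production rate, this puts progressive H.264 distribution at roughly 25 Mbit/s, which lines up nicely with the studio figures below.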


In that panel, there were engineers from three major broadcast studios who revealed their own conclusions:

MPEG-4 @ 20 Mbit/s, MPEG-4 @ 25 Mbit/s, and MPEG-2 @ 45 Mbit/s are used in their facilities to maintain reasonably constant quality.

Monday, February 16, 2009

Automated Transcoding and Publishing

The ideal of video publishing automation would be to go from the final edited version, through the asset management system, to the web and other distribution channels, and along the way collect all relevant metadata, generate thumbnails, and create videos suited to players that can attach relevant advertising based on the content.

A few companies have made great efforts to automate this kind of work flow, but one that stood out for me recently is Entriq’s Dayport solution: http://www.entriq.com/.

Their solution lets content creators integrate their workflow with FinalCut (carrying relevant metadata to the content management system), capture live off air or tape (which can be scheduled to trigger automatically), generate thumbnails, create rules for security and usage policy, and handle DRM, syndication, and CDN integration for delivery. Companies that syndicate content to multiple partners can automate the process and save a lot of time re-formatting metadata, transcoding, etc.

Entriq provides a UGC platform that incorporates an approval process along with download controls and players that make the automation of video workflows from creation to publishing possible.

Wednesday, February 11, 2009

Cloud Video Transcoding

Encoding.com offers video encoding as a web service. Transcoding jobs are submitted through their XML API, where video parameters, source, destination, and logo-insertion information are specified. They use Amazon EC2 to provide scalable transcoding nodes. They have a great cost-estimate tool and case study here: http://www.encoding.com/pricing/.

Then there is Panda: http://pandastream.com/. It is an open source solution that runs on Amazon's EC2 and S3 web services. The main issue here is licensing, as it is an open source application based on ffmpeg.

Zencoder, http://zencoder.tv (in beta), also runs on EC2 or on your own Linux hardware. It works with the On2 Flix engine and ffmpeg.

Sorenson Squish: http://www.sorensonmedia.com/products/?pageID=1&ppc=12&p=41. It is a distributed, Java-based, client-side encoding solution that uses the submitter's computer as the transcoding node.

Now for the plug, Framecaster iNCoder Pro: http://framecaster.com/products.html. It is a distributed, client-side web plug-in. Like Squish, it performs all video encoding on the submitter's computer, directly from a DV cam, webcam, or a Bluetooth-enabled cell phone. It offers a JavaScript API to integrate easily with an asset management system. It can output H.264, FLV (VP6), MPEG-4, MPEG-2, or WMV.

A couple of questions about Panda and Zencoder, since they use and deploy ffmpeg: How does one manage the licensing of codecs across all the different IP owners? Is that a potential patent-infringement liability? Maybe someone out there has answers to those questions.

Monday, February 9, 2009

Standard Video Players

When it comes to transcoding, having a standard video player helps ensure that videos look as they were intended. It helps encoding professionals deliver what the audience expects and provide an engaging user experience.

Akamai has spearheaded the Open Video Player initiative to provide a set of Flash and Silverlight classes, sample code, and other documentation that serve as a media framework and a path toward standardized video players.

The application structure from the engineering perspective is described in figure 1 below.

The sample code will give you a head start in creating and fine-tuning your own player that adheres to this set of best practices. There are many samples of how to integrate with advertising platforms and create custom skins. You'll also get samples of how to connect to Akamai's servers and manage different types of streams and playlists.


Check it out: http://www.openvideoplayer.com/

Saturday, February 7, 2009

Video encoding basics

I came across a great video post by Lisa Larson that covers basic encoding concepts as well as very useful tips to make your videos look great. For those of you who use the Adobe Media Encoder, there is a great tutorial on how to do an encode and tweak parameters.

Check it out: http://www.flashconnections.com/?p=66

Wednesday, February 4, 2009

Resizing Affects Color?

Resizing YUV-type video can cause undesired color artifacts. These artifacts manifest as color blotches, and sometimes color banding, in areas of the image that have very smooth color or gradients.

4:2:2, 4:2:0, 4:0:0, etc. often refer to a YCbCr or YUV (Y'UV) image, where Y is the luma (intensity or detail) and U and V are the chrominance (color) components. You can think of these three components as three planes, Y, U and V, that when superimposed create a color image. More on the subject here: http://en.wikipedia.org/wiki/YUV.

In the case of 4:2:2 images, the horizontal size of the chrominance planes is reduced by half, usually by discarding every other pixel. 4:2:0 images have both the horizontal and vertical size of their UV planes reduced by half. When recreating the original image, the UV planes are enlarged back to their original size using simple linear interpolation methods. This process in itself causes artifacts, which are compounded when the video is resized (see previous articles on resizing algorithms). These errors in the UV planes are most obvious in smooth color or slow gradients.
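To make that round trip concrete, here is a minimal pure-Python sketch of one chroma row: subsampling by discarding every other sample, then enlarging back with simple linear interpolation. Real codecs filter in two dimensions and may use better kernels; this only illustrates where the error comes from.

```python
def subsample_420_row(row):
    """Keep every other chroma sample (the 'discard' method described above)."""
    return row[::2]

def upsample_linear(row, out_len):
    """Enlarge a row back to out_len samples with linear interpolation."""
    out = []
    for i in range(out_len):
        pos = i * (len(row) - 1) / (out_len - 1)  # map output index to input
        lo = int(pos)
        hi = min(lo + 1, len(row) - 1)
        frac = pos - lo
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

# A smooth chroma gradient survives the round trip almost unchanged...
gradient = [16, 20, 24, 28, 32, 36, 40, 44]
rebuilt = upsample_linear(subsample_420_row(gradient), len(gradient))

# ...but a sharp chroma edge gets smeared, which reads as a color blotch:
edge = [16, 16, 16, 16, 240, 240, 240, 240]
smeared = upsample_linear(subsample_420_row(edge), len(edge))
```

On the gradient the interpolated values land close to the originals, but across the sharp edge the rebuilt samples come out around 80 and 176 where the source had 16 and 240: exactly the kind of chroma error that shows up as a blotch on screen.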

Whenever possible, it is better to resize your images in RGB or 4:4:4 space and then convert to a chroma-subsampled YUV format, to minimize artifacts. Try to follow this rule especially with computer animation or other computer-generated images.

Sunday, February 1, 2009

Non Square Pixels

I found a really good document that describes aspect ratios and non-square pixels. It is worth a read.
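The gist, for a quick sanity check: the display aspect ratio is the stored frame's width:height multiplied by the pixel aspect ratio (PAR). A small sketch, using 10/11 as the commonly quoted PAR for 704x480 NTSC material:

```python
from fractions import Fraction

def display_aspect(width, height, par):
    """Display aspect ratio = stored aspect ratio x pixel aspect ratio (PAR)."""
    return Fraction(width, height) * par

# 704x480 NTSC with non-square 10/11 pixels displays as 4:3:
print(display_aspect(704, 480, Fraction(10, 11)))  # 4/3

# Square pixels (PAR = 1) make display and stored aspect identical:
print(display_aspect(640, 480, Fraction(1)))       # 4/3
```

This is why a frame that looks correct on a computer monitor (square pixels) can come out stretched or squeezed after encoding for SD broadcast if the PAR is ignored.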