A Comprehensive Guide to WebP

WebP (pronounced "weppy") is an ambitious and promising image format created by a team at Google back in September of 2010, but browser adoption has been slow.  At the time of this writing the only major browsers that support WebP are Chrome and Opera, even though support in native applications is strong (albeit via plugins in many cases).  In this article I want to present an unbiased, holistic view of the WebP landscape as it stands today.  My end goal is to further the conversation about alternate image formats and image compression on the web, because I believe this is one of the biggest opportunities for making web browsing faster across all devices and networks.

Setting the Stage

The description that Google provides for WebP is concise and explicit:
WebP is a new image format that provides lossless and lossy compression for images on the web. WebP lossless images are 26% smaller in size compared to PNGs. WebP lossy images are 25-34% smaller in size compared to JPEG images at equivalent SSIM index. WebP supports lossless transparency (also known as alpha channel) with just 22% additional bytes. Transparency is also supported with lossy compression and typically provides 3x smaller file sizes compared to PNG when lossy compression is acceptable for the red/green/blue color channels.
The link above also has more information about how WebP achieves those smaller image sizes, as well as the container and lossless bitstream specifications.  There have been a number of posts about real world situations where WebP has succeeded, a few of which are linked below.
WebP is a very different beast today than it was when it launched.  In May of 2011, WebP's compression algorithm got a tune-up, an upsampler was added to the decoder, and support was added to some key applications.  In October of 2011, Google announced WebP support for animation, ICC profiles, XMP metadata, and tiling.  A month later a lossless compression mode and an alpha channel (transparency) were added.  In August of 2012, version 0.2 of the WebP library was released, with improved lossless compression and transparency support.  WebP also has an active mailing list.
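If you want to try these modes yourself, here is a minimal sketch that shells out to the cwebp encoder from libwebp (I'm assuming the cwebp binary is installed and on your PATH; the file names are placeholders):

    // Sketch: converting existing assets to WebP with the cwebp encoder that
    // ships with libwebp.  Assumption: cwebp is installed and on the PATH.
    import { execFileSync } from "child_process";

    // Lossy encode at quality 80, roughly comparable to a medium-quality JPEG.
    execFileSync("cwebp", ["-q", "80", "photo.jpg", "-o", "photo.webp"]);

    // Lossless encode that preserves the alpha channel (for PNG-style assets).
    execFileSync("cwebp", ["-lossless", "logo.png", "-o", "logo.webp"]);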

There is no doubt that Google is committed to seeing WebP succeed, and development is ongoing.  Now let's look at the main benefits that WebP provides.

Benefits of WebP

The main reasons for supporting the use of WebP are as follows:
  • Smaller images - this one is obvious, and was explained in detail above.
  • It can replace both PNGs and JPEGs, so we could use one image format for most web content.
  • It is royalty-free, open source, and anyone can contribute to or implement the spec.
  • The project is backed by Google, and thus has a lot of resources behind it.
That all being said, WebP comes with some downsides, as we will see next.

Drawbacks of WebP

Jeff Muizelaar, a Mozilla employee, wrote a post back in April of 2011 outlining some complaints about WebP.  His list is below, with rebuttals where appropriate.
  • Google's original study used JPEGs as the source images, and looked at peak signal to noise ratio (PSNR) as a comparison metric, which is "accepted as a poor measure of visual quality".  Google did a second study using Structural Similarity (SSIM) vs. bits per pixel (bpp) that showed similar results to the first.  The second study makes this concern moot.
  • WebP lacks support for key image features, specifically EXIF data, ICC color profiles, alpha channel, and 4:4:4:4 YCrCb color representation.  As detailed above, ICC color profiles and alpha transparency were added in a later version of WebP.  I can't speak to EXIF data or 4:4:4:4 YCrCb color, but if you know more about this please let me know and I'll update this point. UPDATE: Vikas Arora from the WebP team let me know that WebP now supports EXIF and XMP metadata as well.
  • There is a cost associated with new image formats - developers have to decide which one to use and clients need to support it forever.   There are third party vendors (like Torbit) and open source modules (mod_pagespeed/ngx_pagespeed) that will automate this for you, so developers no longer have to do the work themselves.
  • The progressive decoding of WebP requires using a separate library, instead of reusing the existing WebM decoder.
There are some additional drawbacks not mentioned in Jeff's article:
  • During the transition period when support is partial, you have to keep duplicate images on your server, creating both JPEG and WebP versions.  This eats up storage space, and can make image retrieval slower (for example, if your working set used to fit in memory, doubling it may force you to go to disk for some images).
  • CDNs and intermediate proxies have to cache both versions of the image, reducing cache hit ratio.
  • Currently the lack of context in Accept headers means that all WebP responses are served with Cache-Control: private so the origin can do User-Agent detection (more on this below).
  • Rolling out new image formats is hard.
I want to go into those last two points a little more.  Ilya Grigorik wrote an excellent post about deploying new image formats on the web, and the challenges that doing so creates.  He brings up the Accept header point from the list above: because Accept headers do not enumerate the image formats a client supports, rolling out a new format like WebP is much harder than it should be.  These are problems that we can fix, as Ilya says in his article.  I don't want to rehash all of that here, so I recommend reading his post before continuing.
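To make the Accept header point concrete, here is a rough sketch of what origin-side negotiation could look like once clients advertise image/webp in their Accept headers.  It is purely illustrative (the file names and port are placeholders), and in practice a CDN or a module like mod_pagespeed would do this for you:

    // Serve WebP to clients that advertise support for it, otherwise fall back
    // to JPEG.  Assumes both variants already exist on disk.
    import * as http from "http";
    import * as fs from "fs";

    http.createServer((req, res) => {
      const wantsWebP = (req.headers.accept ?? "").includes("image/webp");
      res.writeHead(200, {
        "Content-Type": wantsWebP ? "image/webp" : "image/jpeg",
        // Vary on Accept so shared caches can store both variants, rather than
        // resorting to the Cache-Control: private workaround described above.
        "Vary": "Accept",
        "Cache-Control": "public, max-age=86400",
      });
      fs.createReadStream(wantsWebP ? "hero.webp" : "hero.jpg").pipe(res);
    }).listen(8080);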

I also corresponded via email with Jason Weber, the Performance Lead on the Internet Explorer team, and he brought up another challenge with WebP - the time it takes for the downloaded image to actually appear on the screen.  His description of the issue is both thorough and easy to understand, so I am going to simply quote him here:
There are many dimensions to consider when thinking about the performance implications of image formats. Most people gravitate toward quality (subjective) and compression sizes (performance). Both are interesting, but the elapsed time required to download the image is only part of the problem. After the image arrives on the local system there are several steps before it’s seen on the screen. The exact steps depend on the browser, operating system, and hardware. The basic pipeline looks something like this:
  1. System memory is allocated and the encoded image is stored in this memory. 
  2. Additional system memory is allocated, and the image is decoded at native resolution into this intermediary.
  3. The native image may need to be converted into another intermediary format.
  4. The appropriate intermediary is copied to the GPU (on hardware accelerated scenarios).
  5. Finally, the image is drawn to the screen.
On mobile caliber devices it’s not uncommon to see these operations take several hundred milliseconds. Some browsers attempt to do this work in parallel, so it may not directly block execution on the primary thread, but the elapsed time of this process impacts what are often referred to as TTG (time to glass) metrics.
To improve TTG on the client you need to streamline this process. GPUs are massively parallel, highly specialized silicon which can perform graphics operations more efficiently than general purpose CPU silicon. Some operations can be performed 1000x more quickly through the GPU. There’s broad hardware support for traditional image formats like jpg today, which speeds up this process, and these opportunities don’t exist for WebP.
For WebP to make the web faster, the elapsed time saved downloading the images (from fewer bytes) needs to be greater than the additional cost of getting the image to the screen (TTG). That’s not the case in the real world today. And over time, other image formats including jpg have more headroom to improve. Modern hardware is amazing!
This is a point that few people talk about, and it's great that Jason was willing to explain it.  If you spend more incremental time getting an image actually displayed to the screen than you save by downloading fewer bytes, that image has actually made the web slower.

I corresponded with Ilya about this issue, and he said that while WebP decoding is about 1.4x slower than JPEG, this difference is largely overwhelmed by the speed improvement that comes from sending fewer bytes across the wire.  Some additional comments from him are below.
First off, to give the IE team credit, I do think they are ahead of most other browser vendors when it comes to their GPU + mobile browser story -- in fact, probably significantly so. I think Chrome and FF will close this gap in the not so distant future, but credit where credit's due. Now, having said that, I don't know the specific details of Jason's test, but my guess is, he's testing image intensive apps, with good network connectivity (3.9/4G), on latest Nokia hardware, and with minimum/no concern for bandwidth caps... That's fair, but that's not representative. 
Most users are stuck on non HSPA+ connections, with middle of the road hardware, and are using an older (year or more) WebKit build. This profile, compared to what he's [likely] analyzing, is night and day. In fact, in markets where mobile browsing is already dominant (India), bandwidth is the #1 concern - bits are expensive, literally. Hence, I take any argument about GPU performance with a grain of salt. Yes, it is definitely an important metric, and one that all of us will likely pay much closer attention to in the future, but it is not the limiting factor today for the vast majority of the population.

This is where WebP helps and why we're investing all the effort into it... Not to say that WebP is the answer to all things. It just happens to be the best option at this very moment. I hope we have more image formats, with even better compression and better "time to glass" in the future! These requirements are not entirely exclusive.
There is an important distinction to mention here: when we are talking about hardware accelerated browsers where image decoding happens on the GPU, lack of hardware support for WebP can cause a significant increase in TTG.  If we look at browsers that do image decoding on the CPU, the time difference is not as large (~1.4x as Ilya mentioned).  In the former case, it is likely that in the real world increased TTG time overwhelms the decrease in download time from WebP in at least some, and possibly most cases.  In the latter case, the decrease in download time likely dominates the difference in decoding time in at least some, and possibly most cases.  We clearly need more hard data to be published in this area to further inform this debate, and I will update this post as studies are published.
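One way to reason about this trade-off is with a toy model.  Every number below is an assumption chosen for illustration (a 100 KB JPEG, a ~30% WebP savings, a 1 Mbps cellular link, a 30 ms software JPEG decode); the only figure taken from this post is the ~1.4x CPU decode ratio Ilya mentions:

    // Toy model of the download-savings vs. decode-cost trade-off.
    // All inputs are illustrative assumptions, not measurements.
    const jpegBytes = 100 * 1024;              // assumed 100 KB JPEG
    const webpBytes = jpegBytes * 0.7;         // assume WebP is ~30% smaller
    const linkBitsPerSec = 1000 * 1000;        // assumed ~1 Mbps cellular link

    const downloadMs = (bytes: number) => (bytes * 8 * 1000) / linkBitsPerSec;
    const savedDownloadMs = downloadMs(jpegBytes) - downloadMs(webpBytes);

    const jpegDecodeMs = 30;                   // assumed software decode time for the JPEG
    const extraDecodeMs = jpegDecodeMs * 0.4;  // WebP ~1.4x slower on the CPU

    console.log({ savedDownloadMs, extraDecodeMs, netWinMs: savedDownloadMs - extraDecodeMs });

On this slow-link profile the saved download time (~246 ms) dwarfs the extra decode time (~12 ms), which is Ilya's point; put the same image on a fast connection with a hardware JPEG decode path and a software WebP decode, and the balance can tip the other way, which is Jason's.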

I'll also reiterate that if you take speed out of the equation and just talk about the cost of the bytes on the wire (due to bandwidth caps), WebP provides significant and unambiguous benefits.  

Additional Discussion About WebP

There has been a lot of talk about support for WebP in different browsers, and this bug that was filed to add it to Firefox has almost 200 comments at the time of this writing.  In comment 27 a Mozilla employee indicates that support will not be coming to Firefox.  In comment 146 he elaborates, stating that the main thing that would change his mind is support in IE and/or Safari.  A code patch that adds WebP support to Firefox has already been attached to this bug, so at this point the only thing Mozilla needs to do is hit accept.

There is also a JavaScript shim that adds WebP support to all WebM compatible browsers (which includes Firefox), but this kind of shim is a temporary, nasty thing, and you incur an extra blocking JavaScript request to implement it.  For WebP to truly succeed we need native support in all major browsers.  
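For completeness, here is roughly what client-side switching looks like while we wait for native support and better Accept headers.  This is not the WebM-based shim itself, just the common canvas detection heuristic, and the data-src attribute convention is an assumption for the sake of the example:

    // Heuristic WebP detection: browsers whose canvas encoder can produce WebP
    // (Chrome and Opera today) can also decode it.  A common trick, not a spec.
    function supportsWebP(): boolean {
      const canvas = document.createElement("canvas");
      canvas.width = canvas.height = 1;
      return canvas.toDataURL("image/webp").startsWith("data:image/webp");
    }

    // Pick the extension once, then build image URLs from it.  Assumes markup
    // like <img data-src="/images/product-123"> as a placeholder convention.
    const ext = supportsWebP() ? "webp" : "jpg";
    document.querySelectorAll<HTMLImageElement>("img[data-src]").forEach((img) => {
      img.src = `${img.dataset.src}.${ext}`;
    });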

Conclusion

According to the HTTP Archive, over 60% of the bytes on the web come from images.  Image transfer size also has a high correlation with load time.  This means that images are a great place to look for improvements in performance.  In addition, total transfer size of web pages has been steadily increasing over the last few years, with no sign of stopping.  At the same time, average bandwidth and latency for web users have been getting worse, as more and more people use mobile devices (to be clear, bandwidth in the home has been improving, but the switch to mobile networks offsets this improvement). We need to find some way to reverse this trend, or the web will continue to get slower.  WebP is a potential answer to this problem, or at least one of the potential answers.

That being said, there are significant challenges that WebP still has to overcome, the largest of which is finding an easy way for companies with a large, dynamic working set of images to deploy WebP.  Fixing Accept headers is the first step here, and the Chrome team is currently working on this.  As discussed above, a number of automated optimization services will handle the work of converting images to WebP and serving them to the right people, but we still need to fix Accept headers so we can make these products (and intermediate caches) more effective.

If we are able to overcome these challenges, here are some rough numbers that gauge the size of the opportunity.  When Google Instant launched, Google claimed that if everyone in the world used it we would save more than 3.5 billion seconds a day, or 11 hours saved every second.  This works out to about 450 human work years of time per day (assuming 40 hour work weeks).  If you think about the time savings we could get by reducing the cost of delivering and rendering images on the web, you can very quickly see that we would save many times that.  This means that billions of dollars worth of human time could be saved each year by finding a solution to this problem that works universally.
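For anyone who wants to check that arithmetic, the assumptions are a 24-hour day and a 40-hour work week for 52 weeks a year; the 3.5 billion seconds per day figure is Google's claim:

    // Back-of-envelope check of the Google Instant numbers quoted above.
    const secondsSavedPerDay = 3.5e9;                          // Google's claim
    const hoursSavedPerSecond = secondsSavedPerDay / (24 * 60 * 60) / 3600;
    const workYearsPerDay = secondsSavedPerDay / 3600 / (40 * 52);
    console.log(hoursSavedPerSecond.toFixed(1));               // ~11.3 hours saved per second
    console.log(Math.round(workYearsPerDay));                  // ~467 work years per day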

WebP also provides a tremendous amount of value on mobile devices with bandwidth caps.  As I said above, 60% of bytes served on the Web come from images, and for many mobile sites/apps that number is going to be much higher.  Take any mobile shopping app for example: once you have the app on your phone, almost everything you download is a small amount of JSON data plus the images for the products you are looking at.  WebP is a huge win here.  Even if the increased decoding time offsets some of the improvement in download time, people are still saving money on bandwidth usage.

In addition to talking to Jason and Ilya, I've reached out to Mozilla to get more opinions about why adoption has been slow, and I will update this post as I get more feedback.  If you have more examples of WebP studies, additional benefits/drawbacks, or information about the political and technical reasons that are blocking adoption, please post in the comments.  If you prefer to contact me anonymously, you can email me directly at jonathan.n.klein@gmail.com.

Improving the web is difficult, but in the words of Ilya Grigorik, "some uphill battles are worth fighting - I think this is a good one."


Comments

  1. Excellent post Jonathan! I don't have any deep studies on WebP to share; the only time I covered WebP was back in the 2011 Performance Calendar (http://calendar.perfplanet.com/2011/pure-css3-images-hmm-maybe-later/) when I compared several image formats to a CSS3 one. Besides load time I also compared rendering times, and all binary image formats performed pretty much the same. Back then WebP was by far the smallest in size, but its corresponding data URI version ended up 2.7% larger, whereas the PNG and JPEG data URIs were around 1.5% larger. It doesn't tell us much since it wasn't an adequate sample, but you can see that the rendering performance of WebP as a data URI was the worst among the binary formats.
    One thing that just occurred to me regards CPU/GPU usage and battery life. If WebP is smaller and thus faster to load on bandwidth-constrained networks like carrier networks, but slower to render (TTG) due to increased CPU/GPU usage, I wonder what the average battery life of a smartphone would be when browsing photo album apps like Flickr. With CPU throttling to save battery life it might take even longer to render WebP images. This is just my initial guess; unfortunately I have no tests or numbers to support it.
    I hope to see more data-driven studies around WebP performance, and I'm looking forward to adoption by the major browsers.

    1. This is a great point, throttling could exacerbate the decoding problem, and more CPU decoding time in theory should result in reduced battery life if no throttling is taking place. Like you said, we definitely need more studies around this!

  2. WebP is a big improvement, but we're not using existing formats to the fullest yet.

    For example transparent PNGs from Photoshop's "Save for Web" are usually *4 times larger* than they could be if they were saved properly!

    http://pngmini.com/

    and well-generated PNGs have reasonable size compared to WebP:

    http://pngmini.com/vs-webp/

    Same goes for JPEG. Most software uses the 19-year-old libjpeg! JPEGmini has shown that the same format, but with a modern encoder, can achieve much better compression:

    http://jpegmini.com/

    Even old tools with more CPU time can reduce file sizes significantly:

    http://imageoptim.com/

    So before throwing the old formats away, at least use them to their full potential.

    1. I agree, and Joshua Bixby just posted on his blog about this exact topic:

      http://www.webperformancetoday.com/2013/02/20/image-optimization/

      There's a lot of opportunity with existing formats, and we can do even better with something new, once we solve the problems mentioned in the post.

    2. JPEGmini actually produces noticeably worse quality images, so basically it just recompresses images. That can be acceptable, but for me it compresses too much and makes images look blurry and dirty.

    3. pngmini.com (and similar tools like tinypng.org/) introduce lossy techniques as a preprocessor step to the lossless PNG format.

      They're very handy if you're tuning a website and want to reduce the size of a heavily used image, but it's a bit of a stretch to say that using them is saving a PNG "properly", since they literally throw away data (and in the case of PNG8 limit you to 256 colors). You're possibly getting them mixed up with tools that strip metadata and perform lossless optimisation of images, like OptiPNG and similar.

      And of course, those same lossy techniques of reducing the number of colors can be applied to WebP lossless+alpha images, though you probably wouldn't want to because WebP already has "proper" lossy+alpha built into the format (and generally the only reason you'd lossily compress a PNG in this way rather than use a JPEG is to take advantage of the alpha channel that the latter lacks).

      I just noticed your link to a comparison against WebP used in exactly that way. It strikes me as more than a little misleading as it compares very lightly compressed webp files, originally created to benchmark against lossless techniques. It would seem logical that WebP could exceed the quality per bit of his tool as you reduce the quality of encoding (as well as providing more flexibility) or else we'd be using quantised 256-color PNGs instead of JPEGs everywhere on the web today.

  3. What's great about this post is that it moves from performance optimization to sustainability. We see this in other areas of design (e.g. industrial design and architecture) but until now the "sustainable web" has just been about local efficiency. Considering the value of a new efficient image format in the context of the larger internet ecosystem is pure sustainability thinking!

    1. That reminds me of when I first joined the Yahoo! Frontpage team. I noticed several teams (editorial, ads, etc.) were responsible for filling in different parts of the same page, but the overall page performance wasn't good. Nobody wanted to take responsibility for pushing non-optimized images, so I ended up creating a tattle-tale tool that ran smush.it daily on every single image found on the page, summarized the missed % savings, and reported which team was responsible for the regression (based on where on the page the image was found), notifying that team with environmental/sustainability-themed messages :-)

  4. Actually WebP has strong application support (see the Wikipedia page on WebP support). The only thing WebP needs to turn mainstream is for Google to implement it in the Android camera app, so that millions of mobile phones take their photos in WebP.

    1. I actually link to the Wikipedia page in the first paragraph of the article, and I state that native application support is strong. Having WebP support in the native Android camera app would help, but we still need to have support in the browsers for WebP to work on the web. In addition, many of the improvements from WebP come when you start compressing images - you can get a smaller file size with the same or better quality as JPEG. When it comes to pictures on a phone, people typically want the highest quality possible. Then they can decide to downgrade it in the future when they publish the photo to different locations.

    2. Google could put an option in the Android camera to save in WebP lossless mode with the highest quality possible. That would produce pictures with higher quality than JPEG at 0% compression, because JPEG introduces artifacts even in its 0% compression mode.

  5. "WebP decoding is about 1.4x slower than JPEG"

    I feel like this rather significant con is still not being properly addressed.

    Can WebP decrease relative filesizes enough to make up for that gap? I don't doubt that it could do so versus a 10KB 2 megapixel jpeg, but what about with "real" files? How would WebP fare against a medium quality Photoshop jpeg? If the whole world dumps jpeg for WebP tomorrow, would the experience of loading a web page move forward or backward? The post did make a mention about data caps, but how much can WebP *really* save versus jpeg, and can you really use the internet if your phone's battery is dead?

    Last week, I made a series of posts on the Doom9 (video encoding/compression site) forums trying to quantify various image compressors' performance. Performance on the Kodak test set and a subset of the JPEG-XR test set was compared against Photoshop JPG's own performance using a recently updated and improved version of SSIM, IW-SSIM, from its original author. You can find the thread in the Gen. section of the forum, but here's the gist: http://i2.minus.com/iFUn98JTZnONA.png

    My own interpretation is that WebP's 10-15% improvement is not enough, that JPEG's successor is not yet in sight, and that it won't come until the magical -50% at 0.5-1.0 bpp is achieved. That's also the practical standard for the video compression world, which is even MORE desperate for compression gains.

    1. Most real world studies are showing 25-35% improvements, not 10-15%, but I understand the concern that it's still not enough. We should keep working to improve WebP, and if another image format came around that was a significant improvement over WebP, I'm sure that Google would back it.

      The post does go into the question of whether universal WebP support today would make things faster or slower - it largely depends on the browser (hardware accelerated or not) and the device. In aggregate it's hard to say, and this is an area where we need more studies done. There are a number of companies with a ton of real user performance data now, and I'm hoping that one of them will be able to publish a study that looks at the performance impact of WebP across many browsers/devices.

  6. Can I suggest one additional experiment for your blog?

    Take Firefox patched with WebP support, on Windows and OS X.

    Get a copy of the Google WebP page and a copy of the Google JPG page.

    Benchmark them, measuring bytes transferred and using JavaScript timing to check page rendering data.

    Submit the results to the Mozilla bug.

    1. I actually don't think this is what is blocking Firefox from supporting WebP. The engineers at Mozilla are aware of the studies that have been done, and know the stats about the file size improvement that WebP provides. I got some info from Jeff Muizelaar about this, and at this point it sounds like it's a question of whether or not WebP is good enough to justify the costs of supporting it. This is a valid point: a lot of ongoing effort will go into wide support of WebP, and browser vendors have to decide if it's worth backing WebP or waiting for something that provides a larger improvement over JPEG. It's a tough question, and there's no easy way to make this determination.

    2. Note that reduced file size is not the only benefit that WebP provides.

      WebP unifies features from JPG/GIF/PNG in one format.

      It also provides APNG/MNG features.

      There isn't another format with all these features, and we web developers need them.

      Mozilla is blocking the advance of the web like Microsoft did by not supporting SVG.

      This is my point of view...

  7. Great wrap-up on this very interesting technology - I just didn't know about it.

    Maybe it would balloon this article, but what about some comparison with SVG?

    1. Thanks! SVG is an extremely interesting technology as well, and I think it deserves its own post, since it really occupies a separate space. You would never use vector graphics for photo quality imagery, for example. Vector graphics have also been around for quite some time, and as a result there is a ton of information about them available on the Web. I wanted to write this article about WebP since I felt like there wasn't a single place that you could go to get a holistic and unbiased overview.

  8. Like Opera Turbo, Google is now implementing their compression technology in their proxy servers, converting images on the fly to WebP and using the SPDY/4 protocol. This feature is available to Android Chrome Beta users.
