Images Are Slowing The Web. Let’s Fix It.

Karim Salman

A slow-loading website frustrates both your existing audience and new visitors, and in general it will be viewed less favourably than faster-loading rivals. Why? Mostly because humans are impatient creatures who will get bored and wander off in approximately three microseconds.

Here are some interesting references to support that assertion:

 

According to HTTP Archive, at the time of writing this article the average web page weighs 2.3MB, up from 1.7MB in 2014, an increase of roughly 35%. To put that into perspective, the average web page today is approximately the size of the original 1993 Doom game!

Can you guess what makes up most of that data? If you guessed images, pictures, or graphics, then you guessed correctly.

[Chart: average image transfer size per page, May 2016 (source: HTTP Archive)]

 

The above chart, sourced from HTTP Archive, shows the average size of image requests as of May 1st, 2016. Terrifying, isn’t it? It also shows that the average number of images on each page is now at fifty-seven. FIFTY-SEVEN images per page.

Why Do We Need So Many Images?

Well, there is no clear answer to that question. Images are a necessary evil to a certain extent, for many reasons. Here are a few I can think of:

  1. Aesthetics – Good-looking images make for a better browsing experience, making users more likely to stick around and, erm, click around. A crappy-looking site will likely trigger the user to hit the back button and find a less-crappy-looking alternative.
  2. Impressions – A visitor who is impressed by professionally done and well-thought-out visual content is more likely to buy, or at least return.
  3. Necessity – A stock image site, an art or photo gallery website, or a product catalog on an e-commerce site is necessarily going to publish a large number of images.

Let’s now assume that you have exactly the images you need, and reducing the number of images on your sites is not an option. Can something still be done to mitigate the bloat they are adding? Yes it can. In fact, there are a few things you can do about it, which I will discuss further on in this article.

Most Of The Web’s Images Are Unnecessarily Bloated

Metadata Bloat

Photographs pack lots of metadata. This typically means Exchangeable Image File Format (EXIF) and Extensible Metadata Platform (XMP) metadata, which reveals all sorts of details about the image: the camera make and model, the shutter speed, the date and time of capture, the orientation, and many other pieces of information which may be completely unnecessary for the purpose of enriching your web content or experience.

Such metadata can be stripped by the use of photo editing tools, standalone EXIF viewers/editors, and online tools such as Kraken.io’s Free File Uploader (which incidentally does a lot more than just strip away the metadata).

Excessive Quality Bloat

Images need to look great. Poor-quality images leave such a terrible impression that it would often be better to show nothing at all than to show something pixelated or over-compressed – or should we say “over-processed” or “over-optimized”.

At the same time, images at their full quality, even in compressed formats such as JPEG, can pack a lot of weight. That weight can often be substantially reduced without altering the visual quality in any perceptible way. This means that many (or most) images can shed a lot of bytes with no side-effect other than a smaller download (and a shorter loading time!) for your visitors.

Excessive Image Dimensions Bloat

Images on the web can be scaled to fit smaller or larger spaces at the discretion of the web developer, which often leads to images with larger-than-needed dimensions. With HTML and CSS an image can be squeezed into a smaller space or container, which essentially means that its resolution is unnecessarily high for its purpose.

Again, images which are larger than the size of the containers and spaces in which they are rendered will pack unnecessary bytes of data, which can be trimmed away just by resizing them to fit the space available in the container and/or on the target device.

Using multiple image sizes together with the HTML5 srcset and sizes attributes lets the browser automatically choose the correct variant for each target screen size and device. In short: we want the natural (actual) size of the image to be as close as possible to its display size (the dimensions at which it is actually rendered).
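To make that concrete, here is a minimal sketch (in Python, using the Pillow library) of how the differently sized variants that srcset points at might be pre-generated. The file names, widths, and quality setting are illustrative assumptions, not a prescription.

```python
from PIL import Image

# Pre-generate width variants of one source image so the markup can offer
# them via srcset and let the browser pick the best fit for each device.
# "hero.jpg", the widths, and quality=82 are assumptions for illustration.
src = Image.open("hero.jpg")
for width in (480, 768, 1200):
    variant = src.copy()
    variant.thumbnail((width, 10_000))      # cap the width, keep aspect ratio
    variant.save(f"hero-{width}w.jpg", "JPEG", quality=82)

# The variants would then be referenced along these lines:
# <img src="hero-768w.jpg"
#      srcset="hero-480w.jpg 480w, hero-768w.jpg 768w, hero-1200w.jpg 1200w"
#      sizes="(max-width: 600px) 480px, 100vw">
```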

 

Ways To Reduce Image Bloat

Metadata Stripping

Use a tool to strip away EXIF and XMP metadata prior to publishing your images to the Web. Many such tools exist, regardless of your preferred operating system. Just do a Google search for something along the lines of “Remove EXIF [Your OS]” and you should find at least a handful of tools, free and otherwise, which will get the job done one way or another.
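If you would rather script it than hunt for a GUI tool, here is a rough sketch using Python’s Pillow library; the file names and quality value are assumptions. Note that re-saving a JPEG this way also re-encodes it, so it doubles as a light recompression pass.

```python
from PIL import Image

src = Image.open("photo.jpg")     # hypothetical input file
print(dict(src.getexif()))        # peek at whatever EXIF tags Pillow can read

# Pillow only writes EXIF back out if you explicitly pass the original bytes
# (exif=...), so a plain re-save drops the EXIF block. Bear in mind this also
# re-encodes the JPEG, so choose a quality level you are comfortable with.
src.save("photo-stripped.jpg", quality=85)
```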

Image Optimization and Recompression

Image optimization comes down to two activities: reducing the number of bytes used to encode each image pixel, and reducing the total number of pixels.

An image’s file size, once metadata has been excluded, is simply the total number of pixels multiplied by the average number of bytes used to encode each pixel of that image.
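As a rough back-of-the-envelope example (the dimensions and pixel format are assumptions), this is why both compression, which lowers the bytes per pixel, and resizing, which lowers the pixel count, pay off:

```python
# Uncompressed weight of a single image: total pixels * bytes per pixel.
width, height = 1920, 1080   # an assumed full-width hero image
bytes_per_pixel = 3          # 8-bit RGB, no alpha channel
size_mb = width * height * bytes_per_pixel / (1024 * 1024)
print(f"{size_mb:.1f} MB before any compression")   # ~5.9 MB
```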

The two immediate reasons to optimize your images are to shorten download times for your visitors and to reduce the bandwidth you are consuming.

Image optimization falls into two categories: lossless compression and lossy compression. Lossless image optimization (or lossless compression) usually means applying a variety of techniques to reduce an image’s byte size without changing a single pixel.

According to Wikipedia, “Lossless compression is a class of data compression algorithms that allows the original data to be perfectly reconstructed from the compressed data.”

That is to say, no information has been lost. The image, in terms of its visual and informational identity, is identical to the original.
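For a concrete (if modest) example of lossless optimization, Pillow can be asked to spend extra effort finding tighter encoder settings for a PNG; every pixel stays bit-for-bit identical, only the encoding shrinks. The file names are assumptions, and losslessly re-optimizing JPEGs generally needs dedicated tools such as jpegtran or MozJPEG instead.

```python
import os
from PIL import Image

img = Image.open("diagram.png")                     # hypothetical input
img.save("diagram-optimized.png", optimize=True)    # lossless: pixels unchanged

print(os.path.getsize("diagram.png"), "->", os.path.getsize("diagram-optimized.png"))
```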

Lossy image compression refers to the application of techniques which produce an approximation of the input image. When an image is compressed using a lossy algorithm, at least some of the original information has been lost.

For example, JPEG is an image compression format which has a range of compression capabilities, allowing you to trade off between visual quality and file size. When a JPEG is produced from a source image, a quality level (or “q-level”) can be selected, with 100 being the highest quality level and 1 being the lowest.

In most cases, JPEGs with a quality of less than 25 will, in visual terms, deviate from the original so substantially as to be completely unrecognizable. Images with a setting of 80 or higher will, in many cases, look pretty good. There is, however, a catch. The catch is that to a large extent, the compressibility of an image depends on the image content itself.
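A quick way to see both effects – the quality/size trade-off and how much it depends on the image itself – is to encode the same picture at a handful of q-levels and compare the byte counts. A sketch with Pillow, with the file name and quality values as assumptions:

```python
import io
from PIL import Image

img = Image.open("photo.jpg")   # hypothetical input; try a photo vs. a screenshot

# Encode the same pixels at several JPEG quality levels and compare sizes.
# How steeply the curve falls depends heavily on the image content itself.
for q in (95, 85, 75, 50, 25):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=q)
    print(f"q={q:>2}: {buf.tell() / 1024:7.1f} KB")
```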

The “right” compression level for a particular image can also vary from person to person. When compression for the Web is undertaken as a manual task (for instance, using Pixelmator or Photoshop’s Save For Web feature), the person compressing the image might think they have chosen the perfect quality point, one that balances fidelity to the original with an acceptable or minimal download size. Another person might think the resulting image looks terrible, while others might think it could be compressed further still to save precious bytes and remain acceptable.

It is for reasons such as these that Google’s own web performance guru Ilya Grigorik has written:

Image optimization is both an art and science: an art because there is no one definitive answer for how best to compress an individual image, and a science because there are many well developed techniques and algorithms that can significantly reduce the size of an image. Finding the optimal settings for your image requires careful analysis along many dimensions: format capabilities, content of encoded data, quality, pixel dimensions, and more.

 

The Solution

Strip away unnecessary metadata. Recompress images to a quality level which yields substantial savings while keeping the image quality degradation below the perceptible threshold. Resize images to the actual dimensions at which they are displayed on target devices.

Automate all of the above, to save time and allow you to focus on what’s important to your business.
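Kraken.io automates this as a hosted service; as a purely local illustration of what “all of the above” can look like when scripted, here is a simplified Pillow sketch with assumed file names and settings:

```python
from PIL import Image

def optimize_for_web(in_path: str, out_path: str, max_width: int = 1200, quality: int = 82):
    """Strip metadata, resize to the display width, and recompress in one pass."""
    img = Image.open(in_path)
    img.thumbnail((max_width, 10_000))   # only downsizes, keeps aspect ratio
    # Not passing exif= means the EXIF block is simply not written back out.
    img.save(out_path, "JPEG", quality=quality, optimize=True, progressive=True)

optimize_for_web("upload.jpg", "upload-web.jpg")   # hypothetical file names
```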

Kraken.io is an image optimization and compression SaaS platform with additional manipulation capabilities such as image resizing. Our goal is to automatically shrink the byte size of images as much as possible while keeping the visual information intact and of consistently high quality, so that the results never need to be manually checked for fidelity.

 

 
