Smaller Static Sites with New Formats
While most web apps and web pages these days follow Wirth’s law, static web pages don’t. There’s only so much you can mess up when your goal is to display text and images.
That doesn’t mean there’s no room for improvement, though: some newer browser technologies provide tricks that can speed up your static web page. And many blog posts have already been written on things that can be done “for free”:
- Not using (blocking) JavaScript code
- Using HTTP/2
- Caching images, CSS and fonts forever
- Using a load balancer (e.g. Netlify, Cloudflare, Fastly or some other CDN)
I’m not going to delve into those, as many others already have. Rather, I’m going to dig a bit into some other things I’ve done to make the fonts and images on this blog smaller, and some of the pain points around them.
Fonts
On this blog, I have three different fonts: the “logo” font, the text font, and the code listing font.
I host all of them myself for a couple of reasons. It’s mostly because I don’t want my website to break if e.g. Google Fonts is someday killed by Google. That’s very unlikely, but I’d rather not have to worry about updating my website because of third parties. There’s also the fact that the old performance argument for shared font CDNs doesn’t really apply anymore: browsers now partition their caches per site, so a font cached from one site won’t be reused on another.
Hosting things myself also means that I can tune them however I’d like. For that, I tend to get the original TTF/OTF file and shrink it to WOFF and WOFF2 with two different programs:
- `sfnt2woff-zopfli` for WOFF, and
- `woff2_compress` for WOFF2
`sfnt2woff-zopfli` uses, as you may expect, Zopfli compression. The tool itself claims to shrink fonts 5–8% better than normal WOFF compression. From my experience with Zopfli, that doesn’t seem far-fetched. I also use a high iteration count, `-n 150`, which shaves off a couple more bytes without having to wait forever.
For Google’s WOFF2 tool, there are no configuration flags, but it compresses pretty well for me: the result is smaller than the corresponding WOFF file by a wide margin.
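For reference, the whole conversion boils down to two commands along these lines (the font file name here is made up; both tools write their output next to the input file):

$ sfnt2woff-zopfli -n 150 CrimsonPro.ttf   # produces CrimsonPro.woff
$ woff2_compress CrimsonPro.ttf            # produces CrimsonPro.woff2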
Both of these tools preserve metadata and don’t modify the font itself: this is necessary to adhere to the OFL license many fonts use.
Variable Fonts
One of the bigger font changes in the last couple of years has been browser support for variable fonts. If I had the option, I’d use variable fonts everywhere I need multiple font weights. In my experience, the compressed WOFF2 file is smaller than two static variants of the same font. It won’t be worth it if you only use one variant, but most of us use italics or bold text in the main font these days. Can I use claims most browsers support this, and a fallback is pretty easy to set up.
I use this for my code font, but not for my main font – Crimson Pro – as I feel the weights (400 and 700) are too heavy. On this website, I use 300 and 600 instead. 400 and 700 work well for smaller font sizes, but my blog’s font size is pretty large compared to most other sites.
Technically I should’ve been able to use `font-weight` to tune this, but for some reason, Chrome decides to ignore it and sets the usual 400/700 weights instead. Firefox works as expected here, and sets my font to the 300/600 weights that I want.
Now, I could use `font-variation-settings` to force Chrome to set the right weights, but using it feels like too much hassle to me. I’d have to use it for all my fonts, and according to MDN, it’s a pretty low-level property I shouldn’t use if I can solve the problem with `font-weight`. From the MDN page on `font-variation-settings`:

> Font characteristics set using `font-variation-settings` will always override those set using the corresponding basic font properties, e.g. `font-weight`, no matter where they appear in the cascade. […]
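As a sketch of what I’d like to write – the file path and family name are made up here – the variable font setup looks something like this, with the low-level workaround left in as a comment:

@font-face {
  font-family: 'Crimson Pro';
  /* one variable file covering the whole 300–600 weight range */
  src: url('/fonts/CrimsonPro-var.woff2') format('woff2');
  font-weight: 300 600;
}

body {
  font-family: 'Crimson Pro', serif;
  font-weight: 300;
  /* the low-level override I'd rather avoid:
     font-variation-settings: 'wght' 300; */
}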
Instead of bothering with that, I’m waiting for all browsers to handle it correctly. While waiting, I have four different font files for Crimson Pro: every combination of 300/600 and regular/italic. In practice, it’s not that bad because
- I cache the fonts forever,
- the fonts not used aren’t loaded (e.g. bold italics aren’t that common), and
- `font-display: swap;` means the browser won’t block if there are 3–4 words of italics on a page
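As a sketch, each of the four files gets its own `@font-face` rule along these lines (the file names are made up):

@font-face {
  font-family: 'Crimson Pro';
  src: url('/fonts/CrimsonPro-Light.woff2') format('woff2'),
       url('/fonts/CrimsonPro-Light.woff') format('woff');
  font-weight: 300;
  font-style: normal;
  /* render with a fallback font right away, swap in the webfont later */
  font-display: swap;
}
/* …and similar rules for 600, 300 italic and 600 italic */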
It still feels like a hack though, so I’d rather get away from it when I can.
Removing Unused Glyphs
While I do try to make my blog fast to load, I’m not religious about it. I do sin a little: for example, the name of my website is hyPiRion, and the title uses its own font – Alegreya SC.
Using a whole font for 7 characters is extremely wasteful, as there are plenty of glyphs that will never be used. To avoid being too wasteful, I remove those glyphs from the font before producing the WOFF[2] files.
The easiest trick I could find to shrink the font was this StackOverflow answer, but as mentioned, this may break the license of the font you are minifying.
Alegreya SC is under the OFL license, and that one explicitly bars you from using any reserved font name in modified works. Alegreya SC doesn’t have any reserved font names, so technically I wouldn’t need to change the name. However, to avoid any possible confusion, I’ve changed the name to let people know that this is a minified version of the original. The full code thus ends up like this:
import fontforge

infile = 'AlegreyaSC-Regular.ttf'
outfile = 'AlegreyaSC-Regular-hypirion-min.ttf'

font = fontforge.open(infile)

# Select the glyphs we want to keep, invert the selection,
# and remove everything else.
for c in 'hyPiRion':
    font.selection[ord(c)] = True
font.selection.invert()
for glyph in font.selection.byGlyphs:
    font.removeGlyph(glyph)

# Rename the font and note that it has been modified, to avoid
# confusion with the original.
font.comment = "Minified by Jean Niklas L'orange"
font.copyright = "2023 Jean Niklas L'orange, " + font.copyright
font.familyname = 'AlegreyaSC-Regular-min'
font.fullname = 'AlegreyaSC Regular-min'
font.fontname = 'AlegreyaSC-Regular-min'

font.generate(outfile)
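If the fontforge Python module isn’t available to your system Python, the script should also be runnable with `fontforge -script minify-font.py` – the file name here being whatever you saved it as.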
The result is still rather big for only 7 characters, but it’s much smaller than the full font.
If I were to design a new page, I’d not use a “logo” font, and I may consider the system font stack – though I’d prefer to not have to test for multiple OSes.
Images
In the grand scheme of things, fonts aren’t that big of a deal for people that visit your site regularly. They are shared across all web pages, and compared to images, should be rather small. For me, images dominate the bytes transferred, and the reason I wrote this blog post was that I wanted to update my page to include new image formats. However, there’s still value in shrinking the old ones.
Shrinking PNGs
I’ve been fond of compressing PNG images for a long time, even before I started this blog. Nowadays I use `oxipng`, which is both pretty good and fast at compressing PNG images. If you have a folder with PNG images, then this command will shrink all of them:
$ find . -iname '*.png' -exec oxipng -sao max -Z {} \;
As an aside, a good blog post from Johannes Siipola that compares lossless image formats mentions that oxipng is very time-consuming with the zopfli flag (`-Z`) enabled. I don’t experience that at all, though: for 74 PNG images, it takes my computer roughly 17 minutes to compress them all with the command above. It probably matters more when the resolution goes above the dimensions I work with, which are around 640×500 pixels at most.
Shrinking JPEGs
I haven’t really tried to compress JPEG images before this post. I’ve used the GNU Image Manipulation Program to shrink images down to the width of the article section (640px), and then picked 90 or so as the quality. That’s rather high, I suppose, and I was a bit tempted to leave these be and go straight for the new formats.
However, after reading up on whether WebP is truly better than JPEG, I found out that there is some value in recompressing my old JPEG images with MozJPEG. So first, I built the MozJPEG tools like so:
$ git clone https://github.com/mozilla/mozjpeg.git
$ cd mozjpeg
# install (build) dependencies as listed in BUILDING.md
$ mkdir build && cd build
$ cmake -G"Unix Makefiles" ../
$ sudo make install
$ ln -s /opt/mozjpeg/bin/cjpeg ~/bin/mozjpeg
$ ln -s /opt/mozjpeg/bin/jpegtran ~/bin/mozjpegtran
I then take the original image, scale it to the desired size, store it in a lossless format (PNG), and run the following command to create the JPEG file:
$ mozjpeg -quality 85 -optimize tmp.png > result.jpg
I like the quality of images to be high, possibly a bit higher than what is deemed usual for web pages. With the `-quality` argument set to `85`, it seems to produce images I am happy with quality-wise, and they end up a tad smaller than my original JPEG images.
Because I want to do this with multiple files and with multiple formats, I’ve automated this job with a Python script. The source is available as the file `compress-jpg.py`, and the overall idea of the program is as follows:
- Take the file from `file-orig.jpg` and the dimensions from a file named `file-orig.json`
- Resize the file to a lossless format (PNG) with ImageMagick’s `convert` command
- For each file format, run the compressor with the arguments that give the smallest size at the quality I aim for
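The real script is linked above and does a bit more; a minimal sketch of the same pipeline, where the JSON layout (a `width` key) is an assumption of mine, could look like this:

import json
import subprocess
import sys
from pathlib import Path

orig = Path(sys.argv[1])                # e.g. pipe-orig.jpg
base = orig.name[:-len('-orig.jpg')]    # e.g. pipe
# Assumed dimension file layout: {"width": 640}
width = json.loads(orig.with_suffix('.json').read_text())['width']

# Resize to a lossless intermediate (PNG) with ImageMagick.
tmp = orig.with_name(base + '-tmp.png')
subprocess.run(['convert', str(orig), '-resize', str(width), str(tmp)],
               check=True)

# mozjpeg (cjpeg) writes the JPEG to stdout, so redirect it to a file.
with open(orig.with_name(base + '.jpg'), 'wb') as out:
    subprocess.run(['mozjpeg', '-quality', '85', '-optimize', str(tmp)],
                   stdout=out, check=True)

# cwebp and cavif take output file arguments directly.
subprocess.run(['cwebp', str(tmp), '-o', str(orig.with_name(base + '.webp')),
                '-m', '6', '-q', '85'], check=True)
subprocess.run(['cavif', '--cpu-used', '0', '--crf', '18',
                '-i', str(tmp), '-o', str(orig.with_name(base + '.avif'))],
               check=True)

tmp.unlink()  # drop the lossless intermediate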
I’ve intentionally ended all the original file names with the suffix `-orig.jpg` so that I can do
$ find . -iname '*-orig.jpg' -exec python3 compress-jpg.py {} \;
in case I want higher quality on the images or want to convert to another format.
… there is a slight catch here though: I haven’t been smart enough to store the original images. Therefore I either have to
- Recompress the images and accept some compression artefacts
- Fetch the originals and modify them in roughly the same manner
Fortunately, I have few enough images that option 2 isn’t too time-consuming. However, due to link rot, I was unable to do it for all of them. For those images, I treated the saved image as the original, and accepted the compression artefacts it produced.
Producing New Formats
PNG → WebP
Converting from PNG to lossless WebP is mostly a matter of reading the `cwebp` documentation to find out that `-q` specifies the compression factor (higher is better, but slower), `-m` the compression method (higher is better, but slower), and `-lossless` makes it produce a lossless result.
$ cwebp infile.png -o outfile.webp -q 100 -m 6 -lossless
From the documentation, this seems to be the same as doing `-z 9`, but I am not 100% sure. They seem to give the same result, though. The result is always considerably smaller than my PNG images, so it’s an easy way to shave off bytes on your page.
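If the two are indeed equivalent, the shorter spelling would be:

$ cwebp infile.png -o outfile.webp -z 9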
JPEG → WebP
Converting JPEG/lossy images to WebP did require a bit of fiddling with the parameters to get roughly the same quality. I eventually ended up with the following options:
$ cwebp infile.png -o outfile.webp -m 6 -q 85
This does look worse than the JPEG in some cases, and better in others. The difference is small enough that it isn’t really noticeable – even if it is, it should be fine for my blog. And if I later decide that it isn’t, I can always tune the script and rerun it.
For example, here’s the pipe that looks a tiiiny bit sharper with JPEG than with WebP:
Some people seem to recommend `-af` instead of `-q`, but I felt the result contained too many compression artefacts and was too blurry. Here’s an example of the difference:
The reflection of the turtles seems to be too aggressively blurred, and the dust/pollen on the water is also blurred away.
JPEG → AVIF
The newest kid on the block that’s supported by browsers is AVIF. From what I gather, it’s great for replacing JPEG/lossy images, but not lossless ones (at least compared to WebP). For that reason, I only use it for lossy images.
The AVIF landscape is really hard to navigate when it comes to tools and input arguments. When encoding images, I really want two parameters: the encoding speed and the quality it produces. None of the standard tools seems to give me that, so I had to try out several tools to find one that suited me.
I eventually landed on cavif, which has clear build steps and works well for my quality/size target.
However, it still has a ton of different options. Searching around for sane defaults, I found a recommendation with 13 different parameters! I eventually found out I could drop them all and get more or less the same result with only two arguments:
$ cavif --cpu-used 0 --crf 18 -i infile.png -o outfile.avif
This usually shrinks better than WebP, but on average not as much as avif.io claims it should. I guess that’s because I feel it blurs areas with little detail too aggressively, so I tune the quality up to compensate.
JPEG XL Can Wait
There’s an image format that is designed to supersede JPEG, WebP and PNG alike: JPEG XL. From what I can see, it looks super promising.
However, adding support for that is moot as of this writing: Can I use says no browser supports it by default, so I didn’t bother to look into it for now.
Falling Back to Old Formats
People recommend that you provide a fallback to JPEG and PNG whenever you use WebP or AVIF, to ensure that people with older browsers can see your images. Even though most people will be able to see WebP images, it’s not technically difficult to provide a fallback. If you change your good old image tag from
<img src="img.png" alt="alt" title="title">
to
<picture>
<source srcset="img.webp" type="image/webp">
<source srcset="img.png" type="image/png">
<img src="img.png" alt="alt" title="title">
</picture>
you’re effectively there. If the browser understands the `picture`/`source` tags, it’ll pick the first source it is able to handle, and if not, it’ll fall back on the `img` tag. And from what I’ve understood, you only put `alt` and `title` on the `img` tag.
Practically speaking, that’s rather a lot of effort if you use a static blog generator that uses Markdown and has used `![alt](url)` until now. I use Jekyll and decided to make my own Liquid tag for this… or rather, tune my existing Liquid tag for images.
You see, I store all images with their shasum in a folder named `sha` so that I can cache them “forever”, and I got tired of doing that manually. Additionally, I use the fastimage gem to provide the image size in the tag, so that the browser doesn’t have to rerender the page whenever it has fetched a new image.
For that reason, none of the images I add to my blog use the Markdown image syntax; instead they use the following Liquid tag:
{% shaimg 2001-01-01-mypost/image.jpg | My title | My alt %}
The tag checks whether there are any AVIF/WebP versions of the image, and if so, lists them with the AVIF version as the highest priority, followed by WebP.
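To give an idea of the shape of such a tag, here is a stripped-down sketch – the directory layout and the names here are my own simplifications, not the actual plugin, and the shasum handling is omitted entirely:

require 'fastimage'

class ShaimgTag < Liquid::Tag
  def initialize(tag_name, text, tokens)
    super
    @path, @title, @alt = text.split('|').map(&:strip)
  end

  def render(context)
    file = File.join('images', @path)  # assumed image root
    width, height = FastImage.size(file)
    # Emit a <source> per alternative format that exists on disk,
    # AVIF first so capable browsers prefer it.
    sources = %w[avif webp].filter_map do |ext|
      variant = file.sub(/\.\w+\z/, ".#{ext}")
      %(<source srcset="/#{variant}" type="image/#{ext}">) if File.exist?(variant)
    end.join("\n  ")
    <<~HTML
      <picture>
        #{sources}
        <img src="/#{file}" width="#{width}" height="#{height}"
             title="#{@title}" alt="#{@alt}">
      </picture>
    HTML
  end
end

Liquid::Template.register_tag('shaimg', ShaimgTag)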
Feel free to look at the file `hypirion.rb`. The code itself should be decently documented, and should work if you put it into the `_plugins` Jekyll folder and add the `fastimage` gem as a dependency.
Summary
There are new font and image formats you can use to speed up page loads, and here I’ve covered what works for me. In short:
- Compress WOFF files with `sfnt2woff-zopfli`
- Compress WOFF2 files with `woff2_compress`
- Use variable fonts if you use multiple weights and there is a variable version available
- Logos should not use a unique font if possible, but if you have to, shrink it down if the font license allows you to
- Store the original images you use in case you want/need to recompress images in the future
- Use MozJPEG to compress JPEG images
- Use `cwebp` to make WebP images out of JPEG and PNG images
- Use `cavif` to make AVIF images out of JPEG ones
- Automate the entire process for your static site: `compress-jpg.py` and `hypirion.rb` are the scripts/plugins I’ve made for my Jekyll website
- Wait for JPEG XL to take over for the other formats