Mikael's blog

A developer's sixth time trying to maintain a blog

Decreasing Load Time When Using Google Web Fonts

I've been using Google Web Fonts ever since I started building this blog. It's an awesome service with a great user interface and it makes it really easy to add fonts to your web site.

Of course I had to find something wrong with it.

Earlier this week our company released our new web site. On our Yammer page, one of my colleagues posted a link to a WebPageTest result for it and it fared reasonably well. It got an "F" on the "First Byte Time" test, but that test is really finicky and could just as well show an "A" if you retake the test (as I later became aware).

This triggered my inner masochist and I just had to run the same test on this blog. The first result was this:

Test Score
First Byte Time A
Keep-alive Enabled A
Compress Transfer A
Compress Images C
Cache static content C
Effective use of CDN X

To be honest, I actually got an "F" on "First Byte Time" but then I ran the test a second time and got an "A". Another time I got a "B".

It turns out that "First Byte Time" includes the time it takes to perform the DNS lookup. As I don't wield much power over the DNS servers of the world, I ignored it when it came to my own server, but it would turn out to be quite the interesting thing when I went on to fix my C grades.

Compress images

This one was easy. WebPageTest complains if JPEGs are above 50% quality (with a 10% margin), so the image of my robot, uploaded with my image uploader and re-sized on the client, was the culprit here.

I re-encoded it at 50% quality and updated my upload script to always use 0.5 as the quality constant.

Cache static content

Google doesn't add any expiration headers to the Web Fonts CSS. I'm guessing that this is because they sometimes make changes to the underlying code that generates this CSS, and they want the changes to propagate as fast as possible.

I am, however, not that concerned about my fonts, as I've already tested that they do what I want, so I would prefer them to be cached indefinitely. So I thought about hosting them myself, as I was already doing with my symbol font (Elusive Icons).

At first I thought, well, that would put more strain on my server, and it definitely goes against CDN best practices. But then I went ahead and checked my timeline readings at WebPageTest, and it turns out that even though the font data transfers quickly from Google's servers, the DNS lookups for those servers were taking quite some time.

fonts.googleapis.com took roughly 300 ms to look up, but that wasn't the worst one.

themes.googleusercontent.com took more than 1000 ms to resolve, and that is the server that the font files are actually hosted on.

Now, this has to be taken with a grain of salt, just like the "First Byte Time", since the DNS lookup could be really fast if the test were retaken. But it got me thinking.

No matter how you look at it, lofjard.se needs to be resolved once if you want to load this site. But then the resolved address is cached, and every subsequent request to lofjard.se skips the DNS lookup step.

Hosting the web font CSS in my own CSS file meant one less request, but hosting the web font files myself meant that the six font file requests were now directed at lofjard.se instead of another server. The question was whether I could download the fonts from my own server in less time than it took to DNS-lookup and download them from Google, even when taking a parallelization hit from the fact that most browsers only allow four simultaneous requests to the same host. The answer was YES.

My latest result from WebPageTest shows a much nicer score:

Test Score
First Byte Time A
Keep-alive Enabled A
Compress Transfer A
Compress Images A
Cache static content A
Effective use of CDN X

That dreadful "X"

I still get an "X" on CDN usage, and I've actually gotten worse at it since I now use two fewer CDNs than before. But it goes to show that while CDNs are great, they are not always the answer to performance.

by Mikael Lofjärd

Re-sizing Images in Javascript

Uploading files asynchronously with XMLHttpRequest is a neat trick on its own, but what I really wanted was a nice way to upload images from my phone and/or tablet.

The problem with this is that technology is evolving quite rapidly these days and my smartphone has an 8 megapixel camera. 8 megapixel pictures average around 2.2 MB on my iPhone 5, and Chrome (among others) defaults file uploads with XMLHttpRequest to 1 MB. Now, one can easily find a way around such limitations, but then you run straight into the arms of the next thing limiting your upload sizes: the web server.

And even if you changed all the settings to allow larger file uploads, you'd have to ask yourself if that's what you really want. It wasn't what I wanted.

Uploading photos from my phone is awesome, since it's really an internet-connected camera, but when I'm on the go I don't want to spend minutes uploading a file over a crappy cellphone connection. Besides, the largest size for images on my blog is 490 pixels, so what I wanted was to resize the images in the browser, before they are sent to the server.

Once again, HTML5 comes to the rescue.

Stretched on a wooden frame

The HTML5 spec introduced a nifty little element called canvas. It's actually pretty much the same as its old namesake, used for oil paintings since the 15th century: a blank area ready to be filled with pixels, along with some nice Javascript APIs for getting them there (and off of there).

<input type="file" id="file-upload" />
<img src="img/placeholder.png" id="upload-preview" />

Using the File API we can make the img-tag load our image as soon as we've selected it.

var fileUpload = document.getElementById('file-upload');
var uploadPreview = document.getElementById('upload-preview');
 
fileUpload.addEventListener('change', function () {
  var file = this.files[0];
  var reader = new FileReader();
  reader.onload = function(e) {
    uploadPreview.src = e.target.result;
  };
  reader.readAsDataURL(file);
}, false);

To play with the FileReader you need a bleeding edge browser such as Internet Explorer 10, Chrome 7 or Firefox 3.6 (!!!). =)

All jokes aside, Microsoft has been late to the game but at least they have arrived.
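Since support still varies, it doesn't hurt to feature-detect before relying on the File API. A minimal sketch of such a check; the function takes the global object as a parameter purely so it can be exercised outside a browser, and the name is my own:

```javascript
// Returns true if the environment exposes the File API pieces we rely on.
// In a browser you would call it as supportsFileApi(window).
function supportsFileApi(win) {
  return typeof win.File !== 'undefined' &&
         typeof win.FileReader !== 'undefined';
}
```

If the check fails, you can fall back to a plain form post instead of the asynchronous path.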

What good is a preview?

The preview image in itself isn't helping us much with our client-side re-sizing, but this is where the canvas element comes in handy.

var canvas = document.createElement('canvas');
canvas.width = 640;
canvas.height = 480;
var ctx = canvas.getContext("2d");
ctx.drawImage(uploadPreview, 0, 0, 640, 480);

And voila! We have now redrawn our image on a canvas object at 640x480 resolution. In reality you'll want to calculate the width and height from the original image to keep the aspect ratio, but for example purposes this will do.
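As a sketch of that calculation, a small helper that fits an image inside a maximum width (490 pixels in my case) while preserving the aspect ratio could look like this; the function name is my own invention, not part of the original code:

```javascript
// Scale (width, height) so that width <= maxWidth, preserving the
// aspect ratio. Images that already fit are passed through unchanged.
function fitToWidth(width, height, maxWidth) {
  if (width <= maxWidth) {
    return { width: width, height: height };
  }
  var scale = maxWidth / width;
  return { width: maxWidth, height: Math.round(height * scale) };
}

// Usage sketch:
// var size = fitToWidth(uploadPreview.width, uploadPreview.height, 490);
// canvas.width = size.width;
// canvas.height = size.height;
// ctx.drawImage(uploadPreview, 0, 0, size.width, size.height);
```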

Uploading from the canvas

So we have our image resized in a canvas element. Now how do we get it out? The canvas element has a method named toDataURL that can help us with that.

var jpeghigh = canvas.toDataURL("image/jpeg", 0.9);
var jpeglow = canvas.toDataURL("image/jpeg", 0.4);
var pngmedium = canvas.toDataURL("image/png", 0.6);

Here I've created three different images: two JPEGs and one PNG. As you might have deduced, the first argument to toDataURL is a MIME type and the second is the quality. Note that the quality argument only applies to lossy formats such as JPEG; for PNG it is ignored.

DataURLs are nice but I want a binary object that I can send with my XMLHttpRequest.

function dataURItoBlob(dataURI, mimeType) {
  // strip the "data:...;base64," prefix and decode the base64 payload
  var byteString = atob(dataURI.split(',')[1]);
  var ab = new ArrayBuffer(byteString.length);
  var ia = new Uint8Array(ab);
  for (var i = 0; i < byteString.length; i++) {
    ia[i] = byteString.charCodeAt(i);
  }
  return new Blob([ab], { type: mimeType });
}

Before you go off to the comments, pitchforks in hand, shouting "ArrayBuffers are deprecated since Chrome v23!!", have a little faith in me.

I am very well aware (and Chrome reminds me every time I upload a picture), but sending the ArrayBufferView Uint8Array directly to the Blob constructor (as one should) works great on the desktop, but it doesn't work on the iPhone. The above code works on both.

The catch

There always has to be one, doesn't there?

The thing about the above code is that it doesn't work all that well with the iPhone (or the iPad, if you have one of those). The reason for this is iOS's habit of not rotating photos when they are taken, but instead recording the rotation in the Orientation flag of the EXIF metadata. This means that when you draw an image taken in portrait mode (on the iPhone; landscape mode on the iPad) on the canvas, it will be rotated 90 degrees. There is also an image-squashing bug with files above a certain size, but both of these are a subject for another time.

If you need a library that takes care of both of these issues I recommend that you check out the excellent canvasResize library by Göker Cebeci.

by Mikael Lofjärd

Async File Uploads in HTML5

Uploading files using HTML forms has always felt a bit off for me. You had to set your encoding to multipart/form-data and the synchronous nature of form posts always made it a waiting game when uploading larger files.

Then came AJAX and the dawn of "single page applications" and such buzzwords, but file uploads somehow got left behind. Javascript didn't have the access it needed to send files with the XMLHttpRequest. Years passed and along came the File API and XMLHttpRequest Level 2 (it seems to be called that again) with its upload attribute, support for byte streams and progress events.

Today I'm going to show you how to build an asynchronous file uploader with it.

We'll start with the HTML part:

<input type="file" id="files-upload" multiple />
 
<ul id="file-list"></ul>

There's nothing weird going on here; just a regular file selector and a list of uploaded files. The multiple attribute is added so that we can upload more than one file at a time.

We want our files to upload as soon as they are selected so we hook up to the change event on our file selector input.

var filesUpload = document.getElementById('files-upload');
filesUpload.addEventListener("change", function () {
  traverseFiles(this.files);
}, false);

The traverseFiles function checks for File API support and calls the upload function on each of the selected files:

function traverseFiles (files) {
  if (typeof(files) !== "undefined") {
    for (var i = 0; i < files.length; i++) {
      upload(files[i]);
    }
  }
  else {
    var fileList = document.getElementById('file-list');
    fileList.innerHTML = "No support for the File API in this web browser";
  }
}

Uploading

The upload function is where the actual uploading happens:

function upload(file) {
  var li = document.createElement("li"),
    div = document.createElement("div"),
    progressBarContainer = document.createElement("div"),
    progressBar = document.createElement("div"),
    fileList = document.getElementById('file-list'),
    xhr;
 
  progressBarContainer.className = "progress-bar-container";
  progressBar.className = "progress-bar";
  progressBarContainer.appendChild(progressBar);
  div.appendChild(progressBarContainer);
  li.appendChild(div);
 
  // add list item with progress bar
  fileList.appendChild(li);
 
  // create the XMLHttpRequest object
  xhr = new XMLHttpRequest();
 
  // attach progress bar event
  xhr.upload.addEventListener("progress", function (e) {
    if (e.lengthComputable) {
      progressBar.style.width = (e.loaded / e.total) * 100 + "%";
    }
  }, false);
 
  // attach ready state change event
  xhr.addEventListener("readystatechange", function () {
    if (xhr.readyState == 4) { // if we are done
      if (xhr.status == 200) { // and if we succeeded
        progressBarContainer.className += " uploaded";
        progressBar.innerHTML = "Uploaded!";
 
        div.appendChild(document.createTextNode(file.name + '(' + file.size + ')'));
      } else {
        div.className += ' upload-error';
        progressBar.innerHTML = "Failed!";
        div.appendChild(document.createTextNode(file.name));
      }
    }
  }, false);
 
  // open the request asynchronously
  xhr.open("post", "/upload", true);
 
  // set some request headers
  xhr.setRequestHeader("Content-Type", "multipart/form-data");
  xhr.setRequestHeader("X-File-Name", file.name);
  xhr.setRequestHeader("X-File-Size", file.size);
  xhr.setRequestHeader("X-File-Type", file.type);
 
  // send the file
  xhr.send(file);
}

Note how we bind to the progress event on xhr.upload. That is because the upload property is actually another type of XMLHttpRequest object, similar to but not the same as its parent. In this case, all you need to know is that all the events for the actual upload are raised on this object.

Receiving the file

The node.js side of things is not that hard either, assuming that the server below handles the requests forwarded from the /upload URL that we posted to.

var http = require('http');
var fs = require('fs');
 
function onRequest(request, response) {
  request.setEncoding('binary');
 
  var postData = '';
  request.addListener('data', function (postDataChunk) {
    postData += postDataChunk;
  });
 
  request.addListener('end', function () {
    var fileName = request.headers['x-file-name'];
 
    fs.writeFile(fileName, postData, { encoding: 'binary' }, function(err) {
      if (err) {
        response.writeHead(500);
        response.write("File write error.");
        response.end();
      } else {
        response.writeHead(200, { 'Content-Type': 'application/json' });
        response.write("Success!");
        response.end();
      }
    });
  });
}
 
http.createServer(onRequest).listen(80);

The important thing to note here is the request.setEncoding('binary') at the beginning of the onRequest method, and the { encoding: 'binary' } option passed to fs.writeFile().
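Accumulating the body as a binary string works, but collecting the raw chunks as Buffers and joining them once at the end is an alternative that avoids the string round-trip. A sketch of that approach; the helper name receiveFile is mine, not part of the original code:

```javascript
// Collect the request body as Buffers and join them once at the end.
// With no encoding set on the request, the 'data' events emit Buffers.
function receiveFile(request, callback) {
  var chunks = [];
  request.on('data', function (chunk) {
    chunks.push(chunk);
  });
  request.on('end', function () {
    callback(Buffer.concat(chunks));
  });
}
```

Inside onRequest you would then call receiveFile(request, ...) and hand the resulting Buffer straight to fs.writeFile, with no encoding options needed on either side.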

Of course, in a real-world situation you might want some more logic in place to ward off misuse from malicious people, or to control what type of files get uploaded. In my case I added some logic for making sure (it's in the admin section of my blog, so I don't have to be 100% sure) that I only upload images.

I also do some nifty re-sizing of my images before they are uploaded which takes care of long upload times and default size limits for XMLHttpRequests (Chrome defaults to 1 MB) but that is the subject for another time.

by Mikael Lofjärd

Pictures!

I've been hacking away at the blog again and made a little nifty photo uploader for the admin area. It resizes the image on the client (browser) before uploading it with Ajax which is great because now I can upload directly from my phone without worrying about file size.

The uploader really needs its own blog post but I'm heading out so in the meanwhile, here is a sneak preview of things to come!

Robotics

Oh, that's right; inline pictures are working as well.

Remember to refresh your browser cache if stuff looks weird.

by Mikael Lofjärd

Search And You Shall Find

Every blog should have a search box. Not because it's necessary, but because it's fun to implement.

A few weeks ago I ran across a small Javascript library called Lunr.js. It's basically a small text indexer that can rank search results and it's written entirely in Javascript, just the way I like it.

Setting up an index is really easy:

var searchIndex = lunr(function () {
  this.field('title', { boost: 10 })
  this.field('content')
});

Then you just add some documents to the index:

var doc = {
  id: "a-blog-post-title",
  title: "A Blog Post Title",
  content: "This is a crummy example blog post...",
};
searchIndex.add(doc);

Then you can search for it by simply calling searchIndex.search('crummy blog'); and that will return an array of objects with the properties ref and score.

ref is the id property of the indexed document and score is, well, how well it scored. The array will be sorted with the highest scoring result first in the array.
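Mapping those results back to the original documents is then trivial. A sketch, assuming the posts are kept in an object keyed by the same id that went into the index (the function name is my own):

```javascript
// Turn lunr-style results ([{ ref, score }, ...], highest score first)
// into the matching post objects, preserving the ranking order.
function resolveResults(results, postsById) {
  return results.map(function (result) {
    return postsById[result.ref];
  });
}
```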

If you want, you can supply the index with a list of words that should not be counted; by default it has a list of common English words, such as 'and' and 'if', so that they won't be ranked and hurt performance.

Overall, I'm very happy with it. I created the SearchManager and connected it to the CacheManager so that it rebuilds the index if a new post is created or one is edited. I also configured the CacheManager so that it does not cache URLs starting with /search. That way I won't fill up my cache store with all different search results.
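The SearchManager and CacheManager are parts of my own codebase, but purely as an illustration, the rebuild-on-change wiring could be sketched roughly like this (every name here is hypothetical, not the actual implementation):

```javascript
// Illustrative sketch: rebuild the whole index whenever the set of posts
// changes. With a blog-sized corpus a full rebuild is cheap, so there is
// no need for incremental updates. buildIndex is a factory that returns
// a fresh, empty index (e.g. a lunr instance).
function createSearchManager(buildIndex) {
  var index = null;
  return {
    rebuild: function (posts) {
      index = buildIndex();           // start from a fresh, empty index
      posts.forEach(function (post) { // re-add every post
        index.add(post);
      });
    },
    search: function (query) {
      return index ? index.search(query) : [];
    }
  };
}
```

The create/edit handlers would then simply call rebuild with the current list of posts after saving.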

So, if browsing the archive isn't your cup of tea, then get searching folks!

by Mikael Lofjärd