Mikael's blog

A developer's seventh time trying to maintain a blog

And just like that...

… I started on yet another rewrite.


I’ve had my eye on Astro for a while now. One of the things that caught my eye was just how much it looked like Svelte.

Consider the following examples:

---
import BlogPost from "../components/BlogPost.astro";

const allBlogPosts = await fetch("https://blog.lofjard.se/api/blogposts/all")
  .then(data => data.json());
---
<div class="blog-list">
  { allBlogPosts.map(item => (
    <BlogPost item={item} />
  ))}
</div>

<style>
  .blog-list {
    width: 600px;
    margin: 0 auto;
  }
</style>

<script>
  import BlogPost from "../components/BlogPost.svelte";

  const allBlogPosts = await fetch("https://blog.lofjard.se/api/blogposts/all")
    .then(data => data.json());
</script>

<div class="blog-list">
  {#each allBlogPosts as item}
    <BlogPost {item} />
  {/each}
</div>

<style>
  .blog-list {
    width: 600px;
    margin: 0 auto;
  }
</style>

They both fetch some blog posts from an API and pass each post along to another component that renders them. The general structure is the same, i.e. there is some script code up top, some basic templating in the middle and some CSS at the bottom.

The difference

Well, the difference is in how they handle things. While both Svelte and Astro can be configured for SSR (Server Side Rendering), the default in Svelte is to work as a SPA framework, while the default in Astro is to render everything at build time into static files.

Astro has a “zero client-side javascript” policy by default, but certain areas can be “hydrated” with scripts from other frameworks such as React, or in my case Svelte. These areas are called “islands” in Astro-lingo and I plan on using this both for the commenting system and for my admin area.
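As a sketch of what an island could look like here (the Comments.svelte component, its props and the file layout are hypothetical; client:load is the Astro directive that tells it to hydrate the component on page load):

```astro
---
// Everything in this file is rendered at build time and ships no javascript...
import Comments from "../components/Comments.svelte";
---
<article>
  <slot />

  <!-- ...except this island, which Astro hydrates in the browser -->
  <Comments client:load postId="some-post-id" />
</article>
```

Astro also offers lazier directives such as client:idle and client:visible if the island doesn't need to be interactive right away.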

Will it ever stop?

I honestly don’t know. I’m having a lot of fun learning the ins and outs of different frameworks, and this blog always seems like the perfect victim for my latest whim.

There is still some stuff to be done with the design. Not all functionality is done yet. I mentioned which parts in the last blog post. I plan on releasing a first version of this before my summer vacation is over, and since this blog post is written as Astro markdown content, if you are reading this then I have already released it in its current form.


  • It’s a neverending cycle of rewrites.
  • All public facing sites are now static, generated by Astro.
  • Admin area, and comments section will include Svelte “islands”.
by Mikael Lofjärd

Svelte-Blog 0.1 (alpha)

New face

The blog has gotten a face lift with some fresh new styling. In the last 8 years I’ve probably redesigned it 4 times. It always ends with me not completing my design and/or getting bored with it. Even this design is not the one I spoke of in the last update.

New tech

Even though the design is new, the blog itself is the same one I wrote about last year. It’s written in Svelte, and compiled to a static page. Instead of having my own API in Node.js it talks to the CouchDB database directly through its REST API.

Authelia is set up to handle authentication and authorization for the admin area and database backend, ensuring write access only to those deemed worthy.

Missing features

There are currently a lot of things missing in this blog that were available before. They will get implemented over time, but I figured I had better get this version deployed now or it might never happen at all.

The missing feature list includes (but is not limited to):

  • Source code view (it’s on GitHub now instead)
  • Volumetric tag cloud
  • Comments (they are readable, but you can’t post new ones)
  • Search (this one usually stopped working randomly in the old engine anyway)

New life

Will this mean that I will write more blog posts? Who knows? I can’t make any promises. Only time will tell.

by Mikael Lofjärd

Decreasing Load Time When Using Google Web Fonts

I’ve been using Google Web Fonts ever since I started building this blog. It’s an awesome service with a great user interface and it makes it really easy to add fonts to your web site.

Of course I had to find something wrong with it.

Earlier this week our company released our new web site. On our Yammer page, one of my colleagues posted a link to a WebPageTest result for it and it fared reasonably well. It got an “F” on the “First Byte Time” test, but that test is really finicky and could just as well show an “A” if you retake the test (as I later became aware).

This triggered my inner masochist and I just had to run the same test on this blog. The first result was this:

First Byte Time: A
Keep-alive Enabled: A
Compress Transfer: A
Compress Images: C
Cache static content: C
Effective use of CDN: X

To be honest, I actually got an “F” on “First Byte Time” but then I ran the test a second time and got an “A”. Another time I got a “B”.

It turns out that “First Byte Time” includes the time it takes to perform the DNS lookup. As I don’t wield much power over the DNS servers of the world, I ignored it when it came to my own server, but it would turn out to be quite the interesting thing when I went on to fix my C grades.

Compress images

This one was easy. WebPageTest complains if JPEGs are above 50% quality (with a 10% margin), so the image of my robot uploaded with my image uploader and re-sized on the client, was the culprit here.

I re-encoded it at 50% quality and updated my upload script to always use 0.5 as the quality constant.

Cache static content

Google doesn’t add any expiration headers to the Web Fonts CSS. I’m guessing that this is because they sometimes make changes to the underlying code that generates this CSS, and they want the changes to propagate as fast as possible.

I am however not that concerned about my fonts as I’ve already tested that they do what I want, so I would prefer them to be infinitely cached. So I thought about hosting them myself, as I already was doing with my symbol font (Elusive Icons).

At first I thought, well, that will put more strain on my server, and it definitely goes against CDN best practices. But then I went ahead and checked my timeline readings at WebPageTest, and it turns out that even though the font data transferred quickly from Google’s servers, the DNS lookups for those domains were taking quite some time.

fonts.googleapis.com took roughly 300 ms to look up, but that wasn’t the worst one.

themes.googleusercontent.com took more than 1000 ms to resolve, and that is the server that the font files are actually on.

Now this has to be taken with a grain of salt, just like the “First Byte Time” since DNS lookup could be really fast if the test was retaken. But it got me thinking.

No matter how you look at it, lofjard.se needs to be resolved, once, if you want to load this site. But then the resolved address is cached, and every subsequent request to lofjard.se skips the DNS lookup step.

Hosting the web font CSS in my own CSS file meant one less request, but hosting the web font files myself meant that the 6 font file requests were now directed at lofjard.se instead of another server. The question was whether I could download the fonts from my own server in less time than I could DNS lookup and download them from Google, even when taking a parallelization hit from the fact that most browsers only allow 4 simultaneous requests to the same server. The answer was YES.

My latest result from WebPageTest shows a much nicer score:

First Byte Time: A
Keep-alive Enabled: A
Compress Transfer: A
Compress Images: A
Cache static content: A
Effective use of CDN: X

That dreadful “X”

I still get an “X” at CDN usage, and I’ve actually gotten worse at it since I now use two fewer CDNs than before. But it goes to show that while CDNs are great, they are not always the answer to performance.

by Mikael Lofjärd

Re-sizing Images in Javascript

Uploading files asynchronously with XMLHttpRequest is a neat trick on its own, but what I really wanted was a nice way to upload images from my phone and/or tablet.

The problem with this is that technology is evolving quite rapidly these days and my smartphone has an 8 megapixel camera. 8 megapixel pictures average around 2.2 MB on my iPhone 5, and Chrome (and others) defaults file uploads with XMLHttpRequest to 1 MB. Now, one can easily find a way around such limitations, but then you run straight into the arms of the next thing limiting your upload sizes: the web server.

And even if you would change all the settings, to allow larger file uploads, you’d have to ask yourself if that’s what you really want. It wasn’t what I wanted.

Uploading photos from my phone is awesome since it’s really an internet-connected camera, but when I’m on the go I don’t want to spend minutes uploading a file on a crappy cellphone connection. Besides, the largest size for images on my blog is 490 pixels, so what I wanted was to resize the images in the browser, before they are sent to the server.

Once again, HTML5 comes to the rescue.

Stretched on a wooden frame

The HTML5 spec introduced a nifty little element called canvas. It’s actually pretty much the same as its old namesake, used for oil paintings since the 15th century. It is a blank area ready to be filled with pixels, along with some nice Javascript APIs for getting them there (and off of there).

<input type="file" id="file-upload" />
<img src="img/placeholder.png" id="upload-preview" />

Using the File API we can make the img-tag load our image as soon as we’ve selected it.

var fileUpload = document.getElementById('file-upload');
var uploadPreview = document.getElementById('upload-preview');

fileUpload.addEventListener('change', function () {
  var file = this.files[0];
  var reader = new FileReader();
  reader.onload = function (e) {
    uploadPreview.src = e.target.result;
  };
  reader.readAsDataURL(file);
}, false);

To play with the FileReader you need a bleeding edge browser such as Internet Explorer 10, Chrome 7 or Firefox 3.6 (!!!). =)

All jokes aside, Microsoft has been late to the game but at least they have arrived.

What good is a preview?

The preview image in itself isn’t helping us much with our client-side re-sizing, but this is where the canvas element comes in handy.

var canvas = document.createElement('canvas');
canvas.width = 640;
canvas.height = 480;
var ctx = canvas.getContext("2d");
ctx.drawImage(uploadPreview, 0, 0, 640, 480);

And voila! We have now redrawn our image on a canvas object at 640x480 resolution. In reality you’ll want to calculate the width and height from the original image to keep the aspect ratio, but for example purposes this will do.
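Calculating those dimensions is only a few lines of arithmetic. A sketch of the idea (the function name and the 490 px column width mentioned earlier are my framing, not code from the blog):

```javascript
// Scale an image down to fit a maximum width, preserving the aspect ratio.
// Smaller images are left alone; we never upscale.
function fitToWidth(srcWidth, srcHeight, maxWidth) {
  if (srcWidth <= maxWidth) {
    return { width: srcWidth, height: srcHeight };
  }
  var scale = maxWidth / srcWidth;
  return { width: maxWidth, height: Math.round(srcHeight * scale) };
}

// An 8 megapixel 4:3 photo scaled down to a 490 px wide column:
var size = fitToWidth(3264, 2448, 490);
console.log(size.width, size.height); // 490 368
```

canvas.width, canvas.height and the drawImage call would then use size.width and size.height instead of the hard-coded 640x480.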

Uploading from the canvas

So we have our image resized in a canvas element. Now how do we get it out? The canvas element has a method named toDataURL that can help us with that.

var jpeghigh = canvas.toDataURL("image/jpeg", 0.9);
var jpeglow = canvas.toDataURL("image/jpeg", 0.4);
var pngmedium = canvas.toDataURL("image/png", 0.6);

Here I’ve created 3 different images: two JPEGs and one PNG. As you might have deduced, the first argument to toDataURL is a MIME type descriptor and the second is the quality. (Note that the quality argument only applies to lossy formats such as JPEG; for PNG it is ignored.)

DataURLs are nice but I want a binary object that I can send with my XMLHttpRequest.

function dataURItoBlob(dataURI, mimeType) {
  var byteString = atob(dataURI.split(',')[1]);
  var ab = new ArrayBuffer(byteString.length);
  var ia = new Uint8Array(ab);
  for (var i = 0; i < byteString.length; i++) {
    ia[i] = byteString.charCodeAt(i);
  }
  return new Blob([ab], { type: mimeType });
}

Before you go off to the comments, pitchforks in hand, shouting “ArrayBuffers are deprecated since Chrome v23!!”, have a little faith in me.

I am very well aware (and Chrome reminds me every time I upload a picture), but sending the ArrayBufferView Uint8Array directly to the Blob constructor (as one should do) works great on the desktop, but it doesn’t work on the iPhone. The above code works on both.

The catch

There always has to be one, doesn’t there?

The thing about the above code is that it doesn’t work all that well with the iPhone (or the iPad, if you have one of those). The reason for this is iOS’s habit of not rotating photos when they are taken, instead recording the orientation in the EXIF metadata’s Orientation flag. This means that when you draw an image taken in portrait mode (on the iPhone; landscape mode on the iPad) onto the canvas, it will be rotated 90 degrees. There is also an image-squashing bug with files above a certain size, but both of these are a subject for another time.

If you need a library that takes care of both of these issues I recommend that you check out the excellent canvasResize library by Göker Cebeci.

by Mikael Lofjärd

Async File Uploads in HTML5

Uploading files using HTML forms has always felt a bit off for me. You had to set your encoding to multipart/form-data and the synchronous nature of form posts always made it a waiting game when uploading larger files.

Then came AJAX and the dawn of “single page applications” and such buzzwords, but file uploads somehow got left behind. Javascript didn’t have the access it needed to send files with the XMLHttpRequest. Years passed and along came the File API and XMLHttpRequest Level 2 (it seems to be called that again) with its upload attribute, support for byte streams and progress events.

Today I’m going to show you how to build an asynchronous file uploader with it.

We’ll start with the HTML part:

<input type="file" id="files-upload" multiple />

<ul id="file-list"></ul>

There’s nothing weird going on here; just a regular file selector and a list of uploaded files. The multiple attribute is added so that we can upload more than one file at a time.

We want our files to upload as soon as they are selected so we hook up to the change event on our file selector input.

var filesUpload = document.getElementById('files-upload');
filesUpload.addEventListener("change", function () {
  traverseFiles(this.files);
}, false);

The traverseFiles function checks for File API support and calls the upload function on each of the selected files:

function traverseFiles (files) {
  if (typeof(files) !== "undefined") {
    for (var i = 0; i < files.length; i++) {
      upload(files[i]);
    }
  }
  else {
    var fileList = document.getElementById('file-list');
    fileList.innerHTML = "No support for the File API in this web browser";
  }
}

The upload function is where the actual uploading happens:

function upload(file) {
  var li = document.createElement("li"),
    div = document.createElement("div"),
    progressBarContainer = document.createElement("div"),
    progressBar = document.createElement("div"),
    fileList = document.getElementById('file-list'),
    xhr;

  progressBarContainer.className = "progress-bar-container";
  progressBar.className = "progress-bar";

  // add list item with progress bar
  progressBarContainer.appendChild(progressBar);
  div.appendChild(progressBarContainer);
  li.appendChild(div);
  fileList.appendChild(li);

  // create the XMLHttpRequest object
  xhr = new XMLHttpRequest();

  // attach progress bar event
  xhr.upload.addEventListener("progress", function (e) {
    if (e.lengthComputable) {
      progressBar.style.width = (e.loaded / e.total) * 100 + "%";
    }
  }, false);

  // attach ready state change event
  xhr.addEventListener("readystatechange", function () {
    if (xhr.readyState == 4) { // if we are done
      if (xhr.status == 200) { // and if we succeeded
        progressBarContainer.className += " uploaded";
        progressBar.innerHTML = "Uploaded!";
        div.appendChild(document.createTextNode(file.name + ' (' + file.size + ')'));
      } else {
        div.className += ' upload-error';
        progressBar.innerHTML = "Failed!";
      }
    }
  }, false);

  // open the request asynchronously
  xhr.open("post", "/upload", true);

  // set some request headers
  xhr.setRequestHeader("Content-Type", "multipart/form-data");
  xhr.setRequestHeader("X-File-Name", file.name);
  xhr.setRequestHeader("X-File-Size", file.size);
  xhr.setRequestHeader("X-File-Type", file.type);

  // send the file
  xhr.send(file);
}
Note how we bind to the progress event on xhr.upload. That is because the upload property is actually another type of XMLHttpRequest object that is similar, but not the same as, its parent. In this case, all you need to know is that all the events for the actual upload propagate on this object.

Receiving the file

The node.js side of things is not that hard either, assuming that the server below is forwarded to from the /upload URL that we posted to.

var http = require('http');
var fs = require('fs');

function onRequest(request, response) {
  request.setEncoding('binary');

  var postData = '';
  request.addListener('data', function (postDataChunk) {
    postData += postDataChunk;
  });

  request.addListener('end', function () {
    var fileName = request.headers['x-file-name'];

    fs.writeFile(fileName, postData, { encoding: 'binary' }, function (err) {
      if (err) {
        response.write("File write error.");
      } else {
        response.writeHead(200, 'application/json');
      }
      response.end();
    });
  });
}

http.createServer(onRequest).listen(8888); // any port that /upload is proxied to

The important things to note here are the request.setEncoding('binary'); at the beginning of the onRequest method, and the { encoding: 'binary' } option sent to fs.writeFile().

Of course, in a real world situation you might want some more logic in place to ward off misuse from malicious people or control what type of files get uploaded. In my case I added some logic for making sure (it’s in the admin section of my blog so I don’t have to be 100% sure) that I only upload images.

I also do some nifty re-sizing of my images before they are uploaded which takes care of long upload times and default size limits for XMLHttpRequests (Chrome defaults to 1 MB) but that is the subject for another time.

by Mikael Lofjärd

Search And You Shall Find

Every blog should have a search box. Not because it’s necessary, but because it’s fun to implement.

A few weeks ago I ran across a small Javascript library called Lunr.js. It’s basically a small text indexer that can rank search results and it’s written entirely in Javascript, just the way I like it.

Setting up an index is really easy:

var searchIndex = lunr(function () {
  this.field('title', { boost: 10 });
  this.field('content');
  this.ref('id');
});

Then you just add some documents to the index:

var doc = {
  id: "a-blog-post-title",
  title: "A Blog Post Title",
  content: "This is a crummy example blog post..."
};

searchIndex.add(doc);
Then you can search for it by simply calling searchIndex.search('crummy blog'); and that will return an array of objects with the properties ref and score.

ref is the id property of the indexed document and score is, well, how well it scored. The array will be sorted with the highest scoring result first in the array.

If you want you can supply a word list to the search index with words that should not be counted, but by default it has a list of common English words such as ‘and’ and ‘if’ so that they won’t be ranked and affect performance.

Overall, I’m very happy with it. I created the SearchManager and connected it to the CacheManager so that it rebuilds the index if a new post is created or one is edited. I also configured the CacheManager so that it does not cache URLs starting with /search. That way I won’t fill up my cache store with all different search results.

So, if browsing the archive isn’t your cup of tea, then get searching folks!

by Mikael Lofjärd

Progress at Last

Sometimes you need more than your operating system gives you.

That’s when a text editor comes in handy.



#!/bin/bash

EXPECTED_ARGS=2
E_BADARGS=65
E_BADPATH=66

if [ $# -ne $EXPECTED_ARGS ]; then
  echo "Usage: `basename $0` {source} {dest}"
  exit $E_BADARGS
fi

if [[ ! -f "$1" ]]; then
	echo "Source file does not exist or is not a regular file."
	exit $E_BADPATH
fi

DESTSIZE=`du -b "$1" | awk '{print \$1; }'`

DESTFILENAME=`basename "$1"`

if [[ -d "$2" ]]; then
	DESTPATH="$2/$DESTFILENAME"
else
	DESTDIR=`dirname "$2"`
	if [[ ! -d "$DESTDIR" ]]; then
		echo "Dest dir does not exist."
		exit $E_BADPATH
	fi
	DESTPATH="$2"
fi

cat "$1" | pv -s $DESTSIZE -p -e -r > "$DESTPATH"

exit 0
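The only non-obvious line is the DESTSIZE one: du -b prints the size in bytes followed by the file name, and awk keeps just the first column, which is what pv needs for its -s flag. A quick demonstration (the file path here is just for illustration; note that du -b is GNU coreutils, so on BSD/macOS you would use stat -f%z instead):

```shell
# extract a file's size in bytes, the way the script does
printf 'hello' > /tmp/demo-size.txt
SIZE=$(du -b /tmp/demo-size.txt | awk '{ print $1 }')
echo "$SIZE"  # 5
```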

Copying large files to my NAS becomes so much more fun when I actually KNOW that it’s doing what it should. Progress bars FTW!

UPDATE: Now it’s actually working for more than one case. :)

by Mikael Lofjärd

LESS Is More, More Or Less

A while back I read a blog post somewhere about how the LESS parser/compiler had been remade in Javascript.

“Well awesome”, I thought to myself, as I had been wanting some more flexibility in CSS but had been too stubborn/proud to install the SASS compiler since it’s written in Ruby. Needless to say, I wanted to incorporate it in my blog as soon as possible, but I haven’t had the time to actually do it until now.

LESS you say?

LESS (like SASS) is a CSS derived language that adds a whole lot of long needed features to CSS to ease maintenance of large style sheets. It compiles into regular CSS markup either in realtime (through their nifty Javascript implementation in the browser) or, as in my case, as a bootstrapping task when I start my blog.

For now, it’s tacked on in a kind of ugly way in my BundleController, but I might redo the actual code some day since I’m not pleased with it. It works though so for now it will have to suffice.

What does it bring to the table?

It brings variables (!!!). Finally you can stop sprinkling your CSS files with color descriptions and font sizes:

@white: #fff;

.myClass {
  background-color: @white;
}
It’s as easy as that.

It also brings something called mixins, which is kind of like multiple inheritance:

.basefont(@size: 12px) {
  font-family: Arimo, sans-serif;
  font-size: @size;
}

body {
  .basefont();
}

h1 {
  .basefont(18px);
}
Mixins can be quite useful in cutting down on repetitive CSS code, and it has support for parameters and default values.

LESS also lets you nest your rules to reduce repeating your selectors:

div.sitewrapper {
  >nav {
    ul {
      margin: 7px 0px 2px 5px;
      li {
        margin: 3px 0 2px 0;
        a {
          width: 70px;
        }
      }
    }
  }
}
When this is compiled it is turned into:

div.sitewrapper > nav ul {
  margin: 7px 0px 2px 5px;
}

div.sitewrapper > nav ul li {
  margin: 3px 0 2px 0;
}

div.sitewrapper > nav ul li a {
  width: 70px;
}
There is also support for a lot more complex nesting rules, as well as a calculation API that lets you compute colors and distances and make relative offsets and such. I recommend reading up on http://lesscss.org/ about all the cool features of LESS.

So does it make your life easier?

It’s kind of hard to say since I just got it running and my style sheet probably needs more work, but so far I’ve been able to cut around 50 lines of CSS out of my ~660 line file, and it has gotten a lot less repetitive and a lot easier to read, I think.

It’s not deployed yet but when I deploy it this weekend I will let you be the judges as the source code is made available as usual.

by Mikael Lofjärd

Cache Me If You Can

Today at work was “do-anything-but-work-day”. It’s a bit like Google’s 20%, but instead of 20% it’s more like 0.8% or something like that. It was our first time and not that many people had a clear idea about what to do at first. I on the other hand had a mission all planned out.

The Performance Degradation

When I put the blog on the new server back in January, I noticed a small decrease in performance. After a few tests I realized that the CPU was the culprit.

The Atom D525, while dual-core, at 1.6 GHz has roughly half the computational power of the Pentium M at 1.5 GHz, which was what my old server had under the hood.

Node.js can make use of multi-core processors by starting more instances of itself, which made concurrent connections on the new server almost as fast as on the old server. However, concurrent connections aren’t really my problem, since I only have around 30 readers on a good day.

What’s Taking You So Long?

Well even in the old version of the blog, I do a lot of caching. My node-static instance takes care of all static file handling and it does a really good job of caching them. I also cache all of my mustache templates when I start Node.js so I can read them from memory every time I render a page.

What was taking so long was actually more than one thing.

First there was database access. CouchDB is really fast and caches a lot, but its only way of communicating is REST over HTTP so there’s still some overhead getting to those cached results.

And then there was presentation logic. The actual rendering of the data on to the template takes a few milliseconds and all pages with source code on them take a few milliseconds more to render all the syntax highlighting server-side. Sometimes there’s a lot of RegExp running to make it all happen.

The Mission

This brings us back to today and my mission: to build an in-memory cache for web server responses.

My plan was to build a cache that stored the entire HTTP response (headers and content) and that I could clear selectively when needed. This led me to remove my multi-core code and run Node.js as a single process, since otherwise I would have been in another world of hurt trying to keep my processes’ cache stores in sync.

When a new post is added I want to clear most of the cache (list pages, archive, atomfeed etc) but not the post pages, and when a comment is added to a post I just want to clear the list pages and the post page for that post. So I added a few different cache stores that I could clear out as I wanted to.

Most of this is handled by the CacheManager.

/*
 *   Cache Manager
 *   Author:  mikael.lofjard@gmail.com
 *   Website: http://lofjard.se
 *   License: MIT License
 */

var CacheManager = (function () {

  var fs = require('fs');

  var env = require('./environmentManager').EnvironmentManager;
  var misc = require('./misc').Misc;

  var cacheStore = {};

  function getStoreType(url) {
    var urlParts = url.split('/');
    var result = 'dynamic';

    switch (urlParts[1]) {
      case 'source':
      case 'about':
        result = 'static';
        break;
      case 'archive':
      case 'atomfeed':
        result = 'semiStatic';
        break;
      case 'tags':
      case 'tag':
        result = 'semiDynamic';
        break;
      case 'post':
        result = 'floating';
        break;
    }

    return result;
  }

  return {

    init: function () {
      cacheStore = {};
      cacheStore.static = {};         // static between boots       - /source /about
      cacheStore.semiStatic = {};     // clear on new post          - /archive /atomfeed
      cacheStore.semiDynamic = {};    // clear on edit post         - /tags /tag
      cacheStore.dynamic = {};        // clear on comment (default) - / /page
      cacheStore.floating = {};       // null item on comment       - /post
    },

    clearOnNewPost: function () {
      env.info('CacheManager: Clearing cache on new post');
      cacheStore.semiStatic = {};
      cacheStore.semiDynamic = {};
      cacheStore.dynamic = {};
    },

    clearOnEditPost: function (url) {
      env.info('CacheManager: Clearing cache on edit for ' + url);
      cacheStore.semiDynamic = {};
      cacheStore.dynamic = {};
      cacheStore.floating[url] = null;
    },

    clearOnNewComment: function (url) {
      env.info('CacheManager: Clearing cache on comment for ' + url);
      cacheStore.dynamic = {};
      cacheStore.floating[url] = null;
    },

    cache: function (url, headerData, contentData) {
      env.info('CacheManager: Caching content for ' + url);
      cacheStore[getStoreType(url)][url] = { content: contentData, headers: headerData };
    },

    fetch: function (url) {
      var data = cacheStore[getStoreType(url)][url];

      if (typeof(data) != 'undefined' && data != null) {
        env.info('CacheManager: Found cached entry for ' + url);
        return data;
      }

      return null;
    }
  };
})();

typeof(exports) != 'undefined' ? exports.CacheManager = CacheManager : null;

Hooking It Up

Previously my main workflow looked something like this: the Router looked at the request and called the assigned Controller, which fetched data, formed the data into a model and passed the model to the ViewManager, which rendered the result to the response stream.

Hooking up the CacheManager meant that I had to get some parts a little “dirtier” than I wanted, but instead of putting a lot of code into the ViewManager, I created the ResponseManager.

/*
 *   Response Manager
 *   Author:  mikael.lofjard@gmail.com
 *   Website: http://lofjard.se
 *   License: MIT License
 */

var ResponseManager = (function () {

  var env = require('./environmentManager').EnvironmentManager;
  var cm = require('./cacheManager').CacheManager;

  var misc = require('./misc').Misc;

  return {

    writeCachedResponse : function (response, cachedUrl) {
      env.info('ResponseManager: Writing cached view for ' + cachedUrl);

      var data = cm.fetch(cachedUrl);

      response.writeHead(200, data.headers);
      response.write(data.content, 'utf-8');
      response.end();
    },

    writeResponse : function (request, response, responseData, doNotCache) {
      var pathName = misc.getPathName(request.url);

      if (typeof(doNotCache) == 'undefined') {
        cm.cache(pathName, responseData.headers, responseData.content);
      }

      response.writeHead(200, responseData.headers);
      response.write(responseData.content, 'utf-8');
      response.end();
    }
  };
})();

typeof(exports) != 'undefined' ? exports.ResponseManager = ResponseManager : null;

The ResponseManager does most of the talking with the CacheManager and I remade the ViewManager so that the renderView() method now returns the rendered response instead of writing it to the response stream. This lets the Controllers do the job of rendering through the ViewManager and then passing the result to the ResponseManager.

The other part of the equation is the Router. I didn’t really want to put CacheManager calls into the Router, but it is the first place that has a good path to use as a key, so for now the Router checks for the existence of a cached response and, if one is found, sends it to the ResponseManager without even starting to look up which Controller to call.

Show Me The Money

So what kind of a performance boost are we talking about?

Well, using the handy ab (ApacheBench) I sent a thousand requests to each of my different implementations:


Uncached:

Requests per second: 5.41 (mean)
Time per request: 184.761 ms (mean)
Transfer rate: 177.92 Kbytes/sec received

Cached:

Requests per second: 62.86 (mean)
Time per request: 15.910 ms (mean)
Transfer rate: 2097.06 Kbytes/sec received

That’s quite some increase in performance. So much, in fact, that the transfer rate exceeds my measly 10 Mbit/s outgoing fiber connection. At least now, if my blog were to slow down, I know it’s not the server’s fault.

Just for kicks I benchmarked it running on my laptop with a Core 2 Duo at 2.0 GHz and the results point to some possible areas of improvement for Intel on the Atom line (mainly memory access speed):

Cached (on my workstation)

Requests per second: 213.98 (mean)
Time per request: 4.673 ms (mean)
Transfer rate: 7030.95 Kbytes/sec received

Luckily I don’t have enough traffic to warrant an upgrade to my fiber connection. 100/100 Mbit/s costs almost twice as much as my 100/10 Mbit/s.

by Mikael Lofjärd

Content-Length - An HTTP Header/UTF-8 Story

The HTTP protocol has a lot of header fields that affect requests and responses. HTTP also has a couple of different request types (HEAD, GET, POST, PUT and DELETE). Unless you’re building a REST service, you mostly have to deal with GET and POST on the web, and I don’t even differentiate those as much as I should.

A couple of weeks ago, a thought occurred to me: “What happens when I make a HEAD request to my blog?”. Well, the answer turned out to be pretty simple. Node.js ignores any calls made to the write method of the response object if the request was a HEAD request.

That’s all fine with me, but then I started thinking about what type of things should go into a HEAD response and whether I could optimize anything. This led me to look closer into the Content-Length header field.

The Content-Length header field describes the length of the content to come (well, duh) in octets (8-bit bytes). In most scenarios there’s really no need for it with the high speed internet connections of today, but there still exists a frontier where one might get a slow connection and would appreciate the browser reliably telling how much of the site is still to be loaded: mobile.

The Unicode Problem

So I decided that I wanted my HTTP headers to have the Content-Length field in them. My template engine (Mustache) gives me the entire string to render to the response stream, so I figured I could use the string length as my value for Content-Length. That turned out to be an erroneous assumption. All of my files and my database contain UTF-8 encoded data, as they should. The problem is that this means the string length is no longer equal to the byte length, since some characters (like the ä in my last name) are more than 8 bits long.

A quick “google” later and it turns out that there isn’t much native support in JavaScript for calculating the byte length of a UTF-8 string. Luckily for me, someone smarter had made a function that fit the bill (credits to Mike Samuel).

function lengthInUtf8Bytes(str) {
  // Matches only the 10.. bytes that are non-initial characters in a multi-byte sequence.
  var m = encodeURIComponent(str).match(/%[89ABab]/g);
  return str.length + (m ? m.length : 0);
}
Updating the ViewManager

All I needed then was to insert this into my ViewManager.js, and now my responses include Content-Length.

/*
 *   View Manager
 *   Author:  mikael.lofjard@gmail.com
 *   Website: http://lofjard.se
 *   License: MIT License
 */

var ViewManager = (function () {

  var fs = require('fs');
  var path = require('path');
  var mu = require('mustache');

  var env = require('./environmentManager').EnvironmentManager;

  var templatesCached = false;
  var templates = {};
  var partials = {};

  function lengthInUtf8Bytes(str) {
    // Matches only the 10.. bytes that are non-initial characters in a multi-byte sequence.
    var m = encodeURIComponent(str).match(/%[89ABab]/g);
    return str.length + (m ? m.length : 0);
  }

  return {

    init : function (templatePath) {

      if (!templatesCached) {
        console.log('ViewManager: Populating template cache');
        templates = {};
        var allTemplateFiles = fs.readdirSync(templatePath);

        for (var file in allTemplateFiles) {
          console.log(' - Adding ' + allTemplateFiles[file] + ' to template store');
          var filePath = path.resolve(templatePath, allTemplateFiles[file]);
          templates[allTemplateFiles[file]] = fs.readFileSync(filePath, 'utf-8');
        }

        partials.header = templates['header.partial.mu'];
        partials.footer = templates['footer.partial.mu'];

        templatesCached = true;
      }
    },

    renderView : function (response, viewName, model, contentType) {
      if (typeof(model.header) == 'undefined') {
        model.header = {};
      }

      contentType = contentType || 'text/html';

      model.header.trackingCode = env.trackingCode();
      model.footer = {};

      env.info('ViewManager: Rendering view ' + viewName);
      var html = mu.to_html(templates[viewName + '.mu'], model, partials);

      var contentLength = lengthInUtf8Bytes(html);

      var headers = {
        "Content-Type" : contentType + ';charset=utf-8',
        "Content-Length" : contentLength
      };

      response.writeHead(200, headers);
      response.write(html, 'utf-8');
      response.end();
    }
  };
})();

typeof(exports) != 'undefined' ? exports.ViewManager = ViewManager : null;

I’m planning on looking more into the different types of cache header fields sometime as well, but for now I’m happy letting node-static take care of all the caching of my static content, and having no caching at all on the dynamic content. After all, I’ve only got some 20 readers on this blog and I don’t really “feel the pressure” yet.

by Mikael Lofjärd
