Mikael's blog

A developer’s seventh time trying to maintain a blog

Re-sizing Images in Javascript

Uploading files asynchronously with XMLHttpRequest is a neat trick on its own, but what I really wanted was a nice way to upload images from my phone and/or tablet.

The problem with this is that technology evolves quite rapidly these days, and my smartphone has an 8 megapixel camera. An 8 megapixel picture averages around 2.2 MB on my iPhone 5, and Chrome (among others) defaults file uploads with XMLHttpRequest to 1 MB. Now, one can easily find a way around such limitations, but then you run straight into the arms of the next thing limiting your upload sizes: the web server.

And even if you changed all the settings to allow larger file uploads, you’d have to ask yourself if that’s what you really want. It wasn’t what I wanted.

Uploading photos from my phone is awesome, since it’s really an internet connected camera, but when I’m on the go I don’t want to spend minutes uploading a file over a crappy cellphone connection. Besides, the largest size for images on my blog is 490 pixels, so what I wanted was to resize the images in the browser, before they’re sent to the server.

Once again, HTML5 comes to the rescue.

Stretched on a wooden frame

The HTML5 spec introduced a nifty little element called canvas. It’s actually pretty much the same as its old namesake, used for oil paintings since the 15th century. It’s a blank area ready to be filled with pixels, along with some nice Javascript APIs for getting them there (and off of there).

<input type="file" id="file-upload" />
<img src="img/placeholder.png" id="upload-preview" />

Using the File API, we can make the img tag load our image as soon as we’ve selected it.

var fileUpload = document.getElementById('file-upload');
var uploadPreview = document.getElementById('upload-preview');

fileUpload.addEventListener('change', function () {
  var file = this.files[0];
  var reader = new FileReader();
  reader.onload = function(e) {
    uploadPreview.src = e.target.result;
  };
  reader.readAsDataURL(file);
}, false);

To play with the FileReader you need a bleeding edge browser such as Internet Explorer 10, Chrome 7 or Firefox 3.6 (!!!). =)

All jokes aside, Microsoft has been late to the game but at least they have arrived.

What good is a preview?

The preview image in itself isn’t helping us much with our client-side re-sizing, but this is where the canvas element comes in handy.

var canvas = document.createElement('canvas');
canvas.width = 640;
canvas.height = 480;
var ctx = canvas.getContext("2d");
ctx.drawImage(uploadPreview, 0, 0, 640, 480);

And voila! We have now redrawn our image on a canvas object at 640x480 resolution. In reality you’ll want to calculate the width and height from the original image to keep the aspect ratio, but for example purposes this will do.
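In case you’re curious, a minimal sketch of that aspect ratio calculation could look something like this (the function name is my own, and the 490 pixel limit is the one mentioned earlier):

```javascript
// Scale (width, height) down so that neither side exceeds maxSize,
// preserving the aspect ratio. Images that are already small enough
// are returned untouched (the ratio is capped at 1).
function scaleToFit(width, height, maxSize) {
  var ratio = Math.min(maxSize / width, maxSize / height, 1);
  return {
    width: Math.round(width * ratio),
    height: Math.round(height * ratio)
  };
}

var size = scaleToFit(3264, 2448, 490); // a typical 8 megapixel photo
// size.width === 490, size.height === 368
```

You can then feed the computed width and height to canvas.width, canvas.height and drawImage instead of the hard-coded 640x480.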

Uploading from the canvas

So we have our image resized in a canvas element. Now how do we get it out? The canvas element has a method named toDataURL that can help us with that.

var jpeghigh = canvas.toDataURL("image/jpeg", 0.9);
var jpeglow = canvas.toDataURL("image/jpeg", 0.4);
var pngmedium = canvas.toDataURL("image/png", 0.6);

Here I’ve created three different images: two JPEGs and one PNG. As you might have deduced, the first argument to toDataURL is a MIME type and the second is the quality. Note that the quality argument only applies to lossy formats such as JPEG; for PNG it is ignored.

DataURLs are nice but I want a binary object that I can send with my XMLHttpRequest.

function dataURItoBlob(dataURI, mimeType) {
  var byteString = atob(dataURI.split(',')[1]);
  var ab = new ArrayBuffer(byteString.length);
  var ia = new Uint8Array(ab);
  for (var i = 0; i < byteString.length; i++) {
    ia[i] = byteString.charCodeAt(i);
  }
  return new Blob([ab], { type: mimeType });
}

Before you go off to the comments, pitchforks in hand, shouting “ArrayBuffers are deprecated since Chrome v23!!”, have a little faith in me.

I am very well aware (and Chrome reminds me every time I upload a picture), but sending the ArrayBufferView Uint8Array directly to the Blob constructor (as one should) works great on the desktop; it just doesn’t work on the iPhone. The above code works on both.

The catch

There always has to be one, doesn’t there?

The thing about the above code is that it doesn’t work all that well with the iPhone (or the iPad, if you have one of those). The reason for this is iOS’s habit of not rotating photos when they are taken, but instead recording the rotation in the Orientation flag of the EXIF metadata. This means that when you draw an image taken in portrait mode (on the iPhone; landscape mode on the iPad) on the canvas, it will be rotated 90 degrees. There is also an image squashing bug with files above a certain size, but both of these are subjects for another time.

If you need a library that takes care of both of these issues I recommend that you check out the excellent canvasResize library by Göker Cebeci.

by Mikael Lofjärd

Async File Uploads in HTML5

Uploading files using HTML forms has always felt a bit off for me. You had to set your encoding to multipart/form-data and the synchronous nature of form posts always made it a waiting game when uploading larger files.

Then came AJAX and the dawn of “single page applications” and such buzzwords, but file uploads somehow got left behind. Javascript didn’t have the access it needed to send files with the XMLHttpRequest. Years passed and along came the File API and XMLHttpRequest Level 2 (it seems to be called that again) with its upload attribute, support for byte streams and progress events.

Today I’m going to show you how to build an asynchronous file uploader with it.

We’ll start with the HTML part:

<input type="file" id="files-upload" multiple />

<ul id="file-list"></ul>

There’s nothing weird going on here; just a regular file selector and a list of uploaded files. The multiple attribute is added so that we can upload more than one file at a time.

We want our files to upload as soon as they are selected so we hook up to the change event on our file selector input.

var filesUpload = document.getElementById('files-upload');
filesUpload.addEventListener("change", function () {
  traverseFiles(this.files);
}, false);

The traverseFiles function checks for File API support and calls the upload function on each of the selected files:

function traverseFiles (files) {
  if (typeof(files) !== "undefined") {
    for (var i = 0; i < files.length; i++) {
      upload(files[i]);
    }
  }
  else {
    var fileList = document.getElementById('file-list');
    fileList.innerHTML = "No support for the File API in this web browser";
  } 
}

Uploading

The upload function is where the actual uploading happens:

function upload(file) {
  var li = document.createElement("li"),
    div = document.createElement("div"),
    progressBarContainer = document.createElement("div"),
    progressBar = document.createElement("div"),
    fileList = document.getElementById('file-list'),
    xhr;

  progressBarContainer.className = "progress-bar-container";
  progressBar.className = "progress-bar";
  progressBarContainer.appendChild(progressBar);
  div.appendChild(progressBarContainer);
  li.appendChild(div);

  // add list item with progress bar
  fileList.appendChild(li);

  // create the XMLHttpRequest object
  xhr = new XMLHttpRequest();
  
  // attach progress bar event
  xhr.upload.addEventListener("progress", function (e) {
    if (e.lengthComputable) {
      progressBar.style.width = (e.loaded / e.total) * 100 + "%";
    }
  }, false);
  
  // attach ready state change event
  xhr.addEventListener("readystatechange", function () {
    if (xhr.readyState == 4) { // if we are done
      if (xhr.status == 200) { // and if we succeeded
        progressBarContainer.className += " uploaded";
        progressBar.innerHTML = "Uploaded!";
        
        div.appendChild(document.createTextNode(file.name + ' (' + file.size + ')'));
      } else {
        div.className += ' upload-error';
        progressBar.innerHTML = "Failed!";
        div.appendChild(document.createTextNode(file.name));
      }
    }
  }, false);

  // open the request asynchronously
  xhr.open("post", "/upload", true);
      
  // set some request headers
  xhr.setRequestHeader("Content-Type", "multipart/form-data");
  xhr.setRequestHeader("X-File-Name", file.name);
  xhr.setRequestHeader("X-File-Size", file.size);
  xhr.setRequestHeader("X-File-Type", file.type);

  // send the file
  xhr.send(file);
}

Note how we bind to the progress event on xhr.upload. That is because the upload property is actually a separate object (an XMLHttpRequestUpload) that is similar to, but not the same as, its parent XMLHttpRequest. In this case, all you need to know is that the events for the actual upload fire on this object.

Receiving the file

The node.js side of things is not that hard either, assuming that requests to the /upload URL we posted to are forwarded to the server below.

var http = require('http');
var fs = require('fs');

function onRequest(request, response) {
  request.setEncoding('binary');

  var postData = '';
  request.addListener('data', function (postDataChunk) {
    postData += postDataChunk;
  });

  request.addListener('end', function () {
    var fileName = request.headers['x-file-name'];

    fs.writeFile(fileName, postData, { encoding: 'binary' }, function (err) {
      if (err) {
        response.writeHead(500);
        response.write("File write error.");
        response.end();
      } else {
        response.writeHead(200, { 'Content-Type': 'application/json' });
        response.write(JSON.stringify({ message: 'Success!' }));
        response.end();
      }
    });
  });
}

http.createServer(onRequest).listen(80);

The important things to note here are the request.setEncoding('binary'); call at the beginning of the onRequest function, and the { encoding: 'binary' } option passed to fs.writeFile().

Of course, in a real world situation you might want some more logic in place to ward off misuse from malicious people, or to control what types of files get uploaded. In my case I added some logic for making reasonably sure (it’s in the admin section of my blog so I don’t have to be 100% sure) that I only upload images.

I also do some nifty re-sizing of my images before they are uploaded, which takes care of both the long upload times and the default size limits for XMLHttpRequest (Chrome defaults to 1 MB), but that is a subject for another time.

by Mikael Lofjärd

Pictures!

I’ve been hacking away at the blog again and made a little nifty photo uploader for the admin area. It resizes the image on the client (browser) before uploading it with Ajax which is great because now I can upload directly from my phone without worrying about file size.

The uploader really needs its own blog post but I’m heading out so in the meanwhile, here is a sneak preview of things to come!

Robotics

Oh, that’s right; inline pictures are working as well.

Remember to refresh your browser cache if stuff looks weird.

by Mikael Lofjärd

Search And You Shall Find

Every blog should have a search box. Not because it’s necessary, but because it’s fun to implement.

A few weeks ago I ran across a small Javascript library called Lunr.js. It’s basically a small text indexer that can rank search results and it’s written entirely in Javascript, just the way I like it.

Setting up an index is really easy:

var searchIndex = lunr(function () {
  this.field('title', { boost: 10 })
  this.field('content')
});

Then you just add some documents to the index:

var doc = {
  id: "a-blog-post-title",
  title: "A Blog Post Title",
  content: "This is a crummy example blog post...",
};
searchIndex.add(doc);

Then you can search for it by simply calling searchIndex.search('crummy blog'); and that will return an array of objects with the properties ref and score.

ref is the id property of the indexed document and score is, well, how well it scored. The array will be sorted with the highest scoring result first in the array.
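Since search only gives you { ref, score } pairs, you still have to map them back to your documents yourself. A minimal sketch, assuming the posts are kept in an object keyed by id:

```javascript
// postsById maps document ids (the lunr ref values) to full post objects.
function resolveResults(results, postsById) {
  return results.map(function (result) {
    return postsById[result.ref];
  });
}

var postsById = {
  'a-blog-post-title': { title: 'A Blog Post Title' },
  'another-post': { title: 'Another Post' }
};

// lunr returns results sorted with the highest score first.
var results = [
  { ref: 'a-blog-post-title', score: 0.8 },
  { ref: 'another-post', score: 0.2 }
];

var posts = resolveResults(results, postsById);
// posts[0].title === 'A Blog Post Title'
```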

If you want, you can supply your own list of words that should not be indexed, but by default it comes with a list of common English words such as ‘and’ and ‘if’, so that they won’t be ranked and affect performance.
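Conceptually, such a stop word filter is just a predicate that drops common words before they reach the index. A simplified sketch of the idea (not lunr’s actual implementation):

```javascript
var stopWords = ['and', 'if', 'the', 'a', 'of'];

// Remove stop words from a list of tokens before indexing them.
function filterStopWords(tokens) {
  return tokens.filter(function (token) {
    return stopWords.indexOf(token.toLowerCase()) === -1;
  });
}

filterStopWords(['a', 'crummy', 'example', 'and', 'blog']);
// -> ['crummy', 'example', 'blog']
```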

Overall, I’m very happy with it. I created the SearchManager and connected it to the CacheManager so that it rebuilds the index if a new post is created or one is edited. I also configured the CacheManager so that it does not cache URLs starting with /search. That way I won’t fill up my cache store with all different search results.

So, if browsing the archive isn’t your cup of tea, then get searching folks!

by Mikael Lofjärd

New Drapes

I got tired of the old dark green design. It was too murky and spring is in the air, so I redesigned the site to be lighter and more “spring-y”.

Old friends

var awesome = "Inconsolata is back!";

Inconsolata is back as the source code font and Open Sans makes up the majority of the text on the blog now with the exception of headers which are set in Racing Sans One.

All-in-all, I’m happier. And hopefully this can trigger my implementation of multiple pictures per post, since I need that to report on my robot project progress. Summer is coming, and summer means sun, and sun means solar power, which in turn means I need to speed things up if I want this project done before the (OMG spoiler alert!) new baby at the end of July.

by Mikael Lofjärd

4 Hours of Markdown

Wow, that was kind of exhausting.

I’ve completed my rewrite of all my blog posts into Markdown. Somewhere in the middle I found out why my inline HTML didn’t work, which made my old posts look almost acceptable, but the syntax highlighted source code no longer worked since I had moved the highlighting code into the marked configuration.

So on I went into the abyss, continuing to rewrite (more like edit) all my older posts. While I was at it I re-indented all source code examples into using 2-space indentation. Man, some posts do have a lot of source code in them. =)

I made good use of SSH for connecting to my server from my parents-in-law’s cabin (where I’ve spent the last week).

ssh -L 8080:localhost:5984 lofjard.se made sure I could connect to the CouchDB instance on lofjard.se.

tl;dr

  • All posts are now written in Markdown. You will not know the difference.
  • SSH is awesome.

by Mikael Lofjärd

Marked Up With Markdown

Desperate, as always, for lowering my blogging threshold I implemented Markdown syntax for the blog.

As I did this through the marked plugin I lost support for inline HTML (it might be a setting though). This means that for a few hours my old blog posts will look ugly, until I’ve gone through the backlog and converted them to Markdown.

by Mikael Lofjärd

Group Pressure

In a totally unoriginal move I’m now starting to wean myself off Facebook, starting by deleting the Facebook app from my phone and tablet.

Since almost everyone else in the tech sector seems to have done the same thing at least once, I don’t feel that I have to post any particular reason for doing so other than being tired of it.

Since I’m not going cold turkey right away I will still check in on Facebook from my laptop, but the number of “feed reads” will hopefully be few and far between.

by Mikael Lofjärd

OMG New Certificates

One might wonder where I’ve been, or what I’ve been up to these last couple of months. One would be right to wonder why I, so close to the one year anniversary of this blog, suddenly stopped writing.

The truth is kind of embarrassing.

What did you do?

I created a bunch of SSL certificates that were set to expire in a year (by not changing the default value).

So here I was, wanting to write to you about building a robot, going to FOSDEM, hacking away at my blog and so on, but I just couldn’t. Well, not in any easy way, anyway.

But isn’t that an easy fix?

My first thought was to generate new client certificates but that was when I realised that my server certificates had expired as well and my CA certificate had been lost in the maintenance work a few months back.

So my next quest was to recreate my CA certificates, but that unfortunately turned out to take longer than I expected. Apparently I’m a very busy man these days, because every time I sat down and got started on the certificates, something else came up and got in the way.

So what’s different about today

Well for one, I’m home from work tending to my daughter, who has a high fever. After a morning of cartoons and blaming me for “stealing” her toy train, she finally fell asleep and, lo and behold, I have brand new CA certificates, server certificates and client certificates.

I’m back!

Oh, and don’t worry! There will be a few posts about my robot.

by Mikael Lofjärd

The Fabulous Pi

In March this year I took a great leap forward in performance when I built my in-memory cache. It took my blog from a paltry 5 requests per second to a whopping 62 requests per second.

Well, since then I’ve made some changes…

The Replacement

Since you already know The Plan, let’s go ahead and talk a little about what the temp agency sent over; The Raspberry Pi.

The Raspberry Pi is a $35 computer the size of a credit card. It comes with 2 USB ports, HDMI, 100 Mbit ethernet and an SD card reader for storage, and it’s powered by a cellphone charger. It’s also quite similar to the budget smartphone innards of yesteryear.

The Raspberry Pi packs a 700 MHz ARM11 processor and 256 MB of RAM. It also has a pretty powerful GPU, but I haven’t used it yet so I’ll leave it at that.

Having no other computer at arm’s length, I grabbed the Raspberry Pi from a shelf and set it up as a temporary blog server.

So how did it fare?

Raspberry Pi with nginx

Requests per second: 44.36 (mean)
Time per request: 22.543 ms (mean)
Transfer rate: 554.02 Kbytes/sec received

Well that’s not too shabby! In fact you might not even have noticed that the blog has been served up by the Raspberry Pi for the last two days.

It’s worth mentioning that a full page render without the in-memory cache (cold start) took about 2-3 seconds to run though.

Starting fresh

Having a (minuscule) replacement up and running allowed me to move IBS from the closet to my desk for some long-needed extreme makeover.

This time around I’m running Arch Linux on it (same as on my laptop) and I’m hoping that will make it easier to maintain it, as I’ll only have one distribution to keep up with.

So what good was all this?

Requests per second: 163.96 (mean)
Time per request: 6.099 ms (mean)
Transfer rate: 2051.47 Kbytes/sec received

Wow, it more than doubled the performance!

Now you might be wondering about the transfer rate being about the same as before (and much lower than it should be on the Raspberry Pi) and that’s because everything now passes through nginx and gets compressed with gzip.

I’ve also rewritten the code so that nginx now takes care of all the static content, SSL configuration and all virtual domains. That’s a huge load off the code base, and even though I liked coding those parts, they weren’t that much fun to maintain and (as proven) weren’t all that efficient either.

Showing off

Just to put some perspective on how bloody fast my fantastic laptop is, here’s the blog running on jackrabbit:

Requests per second: 1204.22 (mean)
Time per request: 0.830 ms (mean)
Transfer rate: 32266.94 Kbytes/sec received

Now that would put some serious strain on my internet connection. =)

by Mikael Lofjärd
