Mikael's blog

A developer’s seventh time trying to maintain a blog

Oh Crap!

I just realized that comments had stopped working. I had forgotten to upload the new JavaScript file with the SyntaxHighlighter code removed after I implemented it server-side. Silly me.

Comments are now working again.

by Mikael Lofjärd

WiFi Channel 13 and Android 2.x

A friend of mine told me about how a program called inSSIDer helped him get better performance from his wireless network at home. A few days ago I decided to try it out.

inSSIDer is this nifty little profiling tool that visually shows you all the wireless networks in your vicinity, their signal strengths, and which channels they occupy.

It turns out I have 14(!) networks interfering with mine. The best way to minimize interference is of course to use another channel. The problem is that 802.11g/n networks use approx. 20 MHz of bandwidth, and the 13 allowed channels (only 11 in the States/Canada) are spaced only 5 MHz apart.

802.11b networks used approx. 22 MHz of bandwidth, which made room for only 3 non-overlapping channels in the spectrum (1, 6 and 11). There aren’t that many 802.11b devices left nowadays, so in reality we could have 4 non-overlapping channels in Europe (1, 5, 9 and 13), but all networking hardware is made in Asia for the world market, so most wireless routers come with channel 1, 6 or 11 as the default.
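As a back-of-the-envelope check of those numbers: in the 2.4 GHz band, the center frequency of channel n is 2407 + 5 * n MHz (channels 1-13; channel 14 is a special case). A little sketch, with the ~20 MHz channel width for g/n as the assumption:

```javascript
// Center frequency of a 2.4 GHz channel (1-13); channel 14 is a
// special case at 2484 MHz and is not modeled here.
function centerMHz(channel) {
  return 2407 + 5 * channel;
}

// Two channels overlap when their centers are closer together
// than the channel width (~20 MHz for 802.11g/n).
function overlaps(a, b, widthMHz) {
  return Math.abs(centerMHz(a) - centerMHz(b)) < widthMHz;
}

console.log(centerMHz(1));         // 2412
console.log(overlaps(1, 6, 20));   // false - the classic non-overlapping pair
console.log(overlaps(1, 4, 20));   // true  - only 15 MHz apart
console.log(overlaps(11, 13, 20)); // true  - why 13 still overlaps a bit with 11
```

This is also why 1, 5, 9 and 13 work as a non-overlapping set: their centers sit exactly 20 MHz apart.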

This turned out to be the case in my neighbourhood too. 4 networks fought over channel 1, 8 networks (including mine) fought over channel 6, and one lucky bastard had channel 11 all to himself. I decided to use channel 13, since it had the least amount of overlap with anyone.

Now normally all your devices just switch to the new channel automatically, but for some reason my Android phone couldn’t find my network anymore.

A quick Google search the next day led me to a “hidden” settings menu:
Settings -> Wireless & network settings -> Wi-Fi settings -> Press menu-button -> Advanced -> Regulatory domain -> 13 channels

I don’t know if it was set to 11 channels for international compliance or if it was my CyanogenMod 7.1 installation that set it to 11 channels by default, but now it works like a charm on channel 13.

by Mikael Lofjärd

Client Certificates on Android

Today I stayed at home with a seriously sore throat and a mild but annoying headache. Unable to sleep through the day, I set upon myself the task of creating an administration interface for the blog.

One of the reasons I had for building my own blog engine was to make it easy for me to post from my Android tablet. It’s been pretty easy to write new posts on my computer at home using the CouchDB administration interface, but I don’t want that exposed to the Internet.

I thought for a while about building a classic username/password login with sessions and all the usual stuff, but to do that I wanted to transfer the credentials over HTTPS. That meant creating SSL keys and self-signing a certificate. No problem there, since my server runs Linux and all. But while reading up on TLS/SSL and server certificates I drifted into the client certificate jungle, and realized that what I really wanted was to be able to post without using a username and password at all.

Node.js has a great API for making use of client certificates:

var https = require("https");
var fs = require("fs");

function onRequest(request, response) {
  response.writeHead(200);
  response.end("Authorized : " + request.connection.authorized);
}

var options = {
  key: fs.readFileSync('certs/server.key'),	// server key
  cert: fs.readFileSync('certs/lofjard.crt'),	// server certificate
  ca: fs.readFileSync('certs/lofjard.crt'),	// client certificate (or CA)
  requestCert: true
};

https.createServer(options, onRequest).listen(443);
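With requestCert in place, gating the actual admin routes is just a matter of checking request.connection.authorized in the handler. A minimal sketch of how onRequest could be extended (the status codes and messages are my own invention):

```javascript
function onRequest(request, response) {
  // reject anyone whose client certificate didn't validate
  // against the CA configured in the server options
  if (!request.connection.authorized) {
    response.writeHead(403);
    response.end("Forbidden: no valid client certificate");
    return;
  }

  // from here on we know it's me (or someone holding my certificate)
  response.writeHead(200);
  response.end("Welcome to the admin interface");
}
```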

The problem that annoyed me for hours is the fact that Android does NOT support client certificates. In fact there’s a whole lot of things Android doesn’t do when it comes to certificates.

I even went so far as to extract the default CA certificate key store from my rooted HTC Desire, editing it with a little java program I found in a dark corner of the Internet and then booting my phone into recovery mode, remounting a few partitions, moving some files around and then rebooting it again, only to find out that it had no effect.

Apparently Android 4.0 is going to solve all this (and world hunger), but for now I’m stuck either writing on my iPhone, or writing in Evernote on my tablet, syncing to Evernote on my iPhone and then uploading from there. Ice Cream Sandwich can’t come soon enough if you ask me. :)

by Mikael Lofjärd

The power of CommonJS

A few days ago, when I was putting my posts into CouchDB instead of relying on a static HTML file, I also implemented templating with Mustache. Mustache is small, easy to use and has almost no advanced features. Its power lies in the vast number of implementations it has for different platforms, but most of that power comes from one single implementation: the JavaScript implementation.

There’s nothing really special about the JavaScript implementation, except that it is written in JavaScript and intended to run in the web browser. The implication of having it, however, is that you can use the same template files on both the server side and the client side. That was a big win for me, so I decided to use it.

There is also, on their website, a link to a node.js implementation, and that’s what I was planning to use on my server.

After a few minutes of trying to get it to work I gave up. But then I figured: “Hey, I’m writing JavaScript in node.js. Why don’t I just set up the client-side Mustache implementation as a node module?”

So node.js, as you might know, is an implementation of the CommonJS API. CommonJS defines a really simple way of turning your JavaScript files into modules for use in any CommonJS implementation.

exports.to_html = to_html;

This single line was all I needed to add to the JavaScript file I used on the client. I could do some error checking to see if exports is defined, if I wanted to, and that would give me a way of using the exact same file on both the client and the server. However, I decided to have two files, since I wanted to put the node module with my global node modules for use in other projects.
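The error check I mention would look something like this (a sketch, with the actual rendering body elided):

```javascript
function to_html(template, view) {
  // ... the actual Mustache rendering code goes here ...
  return "";
}

// only attach the export when a CommonJS 'exports' object exists,
// so the very same file keeps working when included via a <script> tag
if (typeof exports !== "undefined") {
  exports.to_html = to_html;
}
```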

Taking it to the next level

Today I decided that I wanted to try and do the same thing with the SyntaxHighlighter library. That turned out to be a little trickier.

When Alex wrote version 3.x of SyntaxHighlighter he, wisely, decided that it should be a CommonJS module. He even wrote a test that uses it in node.js. The problem with his implementation lies in what features he exposed.

When you run SyntaxHighlighter in a browser, you make a call to a function that gets all HTML-elements containing code (by default the <pre>-elements). Then it gets their innerHTML attribute and does its magic with it. When it’s done it replaces the <pre>-element with the new pretty <div>-element containing all the highlighted code.

This doesn’t work so well on the server, since it uses DOM manipulation and the server side has no DOM (some people are trying to build DOM access as a node module, but that feels dirty to me).

What Alex did was expose a function that renders a string containing the code into a string containing the pretty <div>-element. Like this (from his website):

var sys = require('sys'),
  shSyntaxHighlighter = require('shCore').SyntaxHighlighter,
  shJScript = require('shBrushJScript').Brush,
  code = '\
    function helloWorld()\
    {\
      // this is great!\
      for(var i = 0; i <= 1; i++)\
        alert("yay");\
    }\
  ',
  brush = new shJScript();
 
brush.init({ toolbar: false });
sys.puts(brush.getHtml(code));

That’s great if you have a lot of code examples sitting in a database. I, however, have all my code examples embedded in blog posts, surrounded by a lot of text and HTML. I want it to work the same way it does client-side.

So I hacked it. It’s not pretty, it still contains a lot of code that I don’t need and it involves a bit of XRegExp magic (XRegExp is awesome btw).

What it does do is work, beautifully. And now I can do this:

var sh = require('./shCore').SyntaxHighlighter;

// requiring brushes loads them into the sh-object somehow
var bashBrush = require('./shBrushBash').Brush;
var csharpBrush = require('./shBrushCSharp').Brush;
var cssBrush = require('./shBrushCss').Brush;
var jsBrush = require('./shBrushJScript').Brush;
var xmlBrush = require('./shBrushXml').Brush;

//----- I cut out the fetching blog post stuff from here -----//

// apply syntax highlighting
blogpost.htmlContent = sh.highlight(blogpost.htmlContent);
		
// apply template
var html = mu.to_html(templates['blogpost.mu'], blogpost);

// write it back to the response stream
response.writeHead(200, { "Content-Type" : "text/html" });
response.end(html);
return;

It also saves my readers from having to download 166 KB worth of data distributed over six HTTP requests, and that, ladies and gentlemen, is a win.

by Mikael Lofjärd

SplitCode v0.1 - A success story

Socket.io almost makes it too easy. Here’s my early, early alpha code. Works like a charm.

Server code (for node.js):

var io = require("socket.io").listen(1338); 
io.sockets.on('connection', function (client) {
  client.on('codeChanged', function(code) {
    client.broadcast.emit('changeCode', code);
  });

  client.on('scrolled', function(position) {
    client.broadcast.emit('scroll', position);
  });
});

Master.html:

<!doctype html>
<html>
<head>
  <meta charset="utf-8" />
  <title>SplitCode Master</title>
  <script src="socket.io.min.js"></script>
</head>
<body>
  <textarea id="mastertext" onKeyUp="maybesend();" wrap="off"></textarea>
  <script>
    var master = document.getElementById("mastertext");
    var socket = io.connect('192.168.0.15:1338');

    var th = {}; 
    function maybesend() {
      clearTimeout(th.timeout);	
      th.timeout = setTimeout(send, 100);
    }

    function send() {
      socket.emit('codeChanged', master.value);
    }
  </script>
</body>
</html>

Slave.html:

<!doctype html>
<html>
<head>
  <meta charset="utf-8" />
  <title>SplitCode Slave</title>
  <script src="socket.io.min.js"></script>
</head>
<body>
  <textarea id="slavetext" wrap="off"></textarea>
  <script>
    var slave = document.getElementById("slavetext");
    var socket = io.connect('192.168.0.15:1338');

    socket.on('changeCode', function (code) {
      slave.value = code;
    });
  </script>
</body>
</html>

Never mind all the non-compliant HTML or other code quality issues. The story here is that socket.io is dead simple to use, and that it is blazingly fast.
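One loose end: the server already re-broadcasts 'scrolled' as 'scroll', but neither page wires it up yet. A sketch of the missing handlers (the clamping helper is my own addition, so that a slave with a shorter document doesn't scroll past its own end):

```javascript
// keep an incoming scroll position within the slave's own bounds
function clampScroll(position, maxScroll) {
  return Math.min(Math.max(position, 0), maxScroll);
}

// in Master.html:
//   master.onscroll = function () {
//     socket.emit('scrolled', master.scrollTop);
//   };

// in Slave.html:
//   socket.on('scroll', function (position) {
//     slave.scrollTop = clampScroll(position, slave.scrollHeight);
//   });
```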

by Mikael Lofjärd

Coding on stage

I held a tutorial on HTML5 today at work and my preparations led me to think about coding while doing a presentation. This was done a lot at Øredev, but everyone did it in the same two ways. Either you duplicate your screen and let your attendees see everything that goes on on your desktop, or you extend your desktop, keeping your notes on the computer screen and your code on the projector.

The problem with the first option is that you need to have your notes on paper. The problem with the second option is the strain it puts on your neck when you have to look at the projector while coding and face the attendees at the same time.

What I wanted was a window on my desktop that I could write code in and another window on the extended projector screen that would show the code in realtime while I was typing it.

“To the Google-mobile, Batman!”
As usual, someone has already done it, but only on the Mac.

I’m a Windows person most days so that wouldn’t work for me. I decided to write my own. The first prototype, which I used today with some success, was a web page with two big textarea-elements which I dragged out across both screens. I wrote in the one on my computer screen and it replicated the text on the other textarea on the projector. It worked but it wasn’t pretty.

Enter node.js and socket.io

This is what I will try tonight: a kind of server/master/slave solution with socket.io/node.js on the server side and html/socket.io in the clients. This will allow me to have separate windows and even separate computers if I want to.

Stay tuned for my failure/success story tomorrow.

by Mikael Lofjärd

Double linked paging in CouchDB

Yesterday I said that I would look into paging, as my post count had reached 10+, so that’s what I did today.

Paging in CouchDB isn’t all that straightforward, for a bunch of reasons that I’ll try to explain.

Firstly, to be able to query anything in Couch you need a View. A Couch View is basically a really fast hash of your documents, constructed with a little piece of JavaScript (like everything in Couch).

In my case it looks like this:

function(doc)
{
  if(doc.type === "blogpost" && doc.published) {
    emit(doc.dateTime, doc);
  }
}

The execution of this code is what makes Couch so fast for reads. The magic can be read about here and if you’re interested in how Couch maintains its indexes, then it’s a good read. It is however not the point of this blog post.

The important thing to know about the above piece of code is that it returns all published blogposts with the dateTime-attribute as the Key and the whole document as the Value.

The reason for using a date and time as the key in this view is that Couch always sorts its views on the key, and I want my blog posts returned to me in reverse chronological order.

CouchDB has a few interesting arguments that you can set when querying a view:

  • key - If set, the query will only return documents with that key.
  • descending - Is used to set the sort order.
  • startkey - The start key of a range of keys to query for.
  • endkey - The end key of a range of keys to query for.
  • limit - Select only a set number of documents.
  • skip - Skip a number of documents before yielding results.

Easy paging

Now the easy way of doing this would be to set limit = page size and skip = page size * page number (zero-indexed of course).

The problem with that is that skip is an expensive operation in Couch. In my case with 10-20 blog posts by now it wouldn’t even matter but if it’s worth doing, it’s worth doing it right.
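For reference, the easy version would look something like this (page is a hypothetical zero-indexed page number; the view name is the one from my setup):

```javascript
var pageSize = 7;
var page = 2; // zero-indexed page number

// 'skip' makes Couch read and throw away pageSize * page rows
// server-side on every request, which is why it gets slower
// the deeper you page
var queryargs = {
  descending: true,
  limit: pageSize,
  skip: pageSize * page
};

// db.view('posts/publishedByDateTime', queryargs, render);
```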

Double linked (awesome) paging

Single linked paging wasn’t that hard either, but I wanted more than an “Older posts” button. I (for some weird reason) was hell-bent on also having a “Newer posts” button, so that I could step freely forwards and backwards through my posts.

The trick relies on using the startkey-, limit- and descending-arguments.

On the front page I use an “A” as my startkey. I also set descending = true so that I get my results in reverse order. When querying with descending = true I needed a startkey value greater than any key in my view. Since my keys are timestamps starting with the year number, I knew that a letter would be considered “greater”. (Remember, the order is reversed.)

I then set limit = page size + 1. The +1 document is never displayed on the website, but I use its key as the startkey for the next page. If the result size is less than page size + 1, then I have reached the last page.

The “previous page”-button was a bit trickier to figure out, but the solution was quite simple.

The same way I use the +1 document as the startkey to get to the next page, I use the first post in the result as the startkey to get to the previous page, only this time I query with descending = false and limit = page size + 2.

If the result size is less than page size + 2, then I’m at the first page, and all I have to do is reverse the result manually and then do all the same stuff as before.

Code

Here is the code to help you figure out what I just said. I’m not really that good at explaining this stuff in the middle of the night. =)

function page(request, response) {
  var key = url.parse(request.url, true).query.key;

  var startKey = 'A';
  var pageSize = 8; // 7 for show and 1 for reference
  var desc = true;
  var firstPage = true;

  if(key) {
    var prefix = key.substring(0, 1);
    desc = (prefix != 'p');
    startKey = key.substring(1);
    firstPage = false;
  }

  var nextPageKey = null;
  var prevPageKey = null;

  var queryargs = { descending: desc, limit: (desc ? pageSize : pageSize + 1), startkey: startKey};
  db.view('posts/publishedByDateTime', queryargs, function (err, doc) {
    if (err) {
      console.log("CouchDB: DB Read error");
      error(request, response, 404);
      return;
    }

    if (!desc) {
      if (doc.length < pageSize + 1) {
        firstPage = true;
      } else {
        doc.pop();
      }

      // reverse doc if we came from a "prev page" click
      // since we then selected with descending: false;
      doc.reverse();
    }

    // if result size = page size then remove the last item
    // in the array but store its key for the next page button
    if (doc.length == pageSize) {
      nextPageKey = doc.pop().key;
    }

    // if there are results and we are not on the first
    // page then set the key for the previous page button
    if (doc.length > 0 && !firstPage ) {
      prevPageKey = doc[0].key;
    }

    // create a list of posts and keys for use in the html
    var postList = {
      posts : doc,
      olderPosts: (nextPageKey != null),
      nextPageKey: 'n' + nextPageKey,
      newerPosts: (prevPageKey != null),
      prevPageKey: 'p' + prevPageKey,
      renderPageNav: ((prevPageKey != null) || (nextPageKey != null))
    };

    // send JSON to Mustache
    var html = mu.to_html(templates['list.mu'], postList);

    // write HTML to response stream
    response.writeHead(200, { "Content-Type" : "text/html" });
    response.end(html);
    return;
  });
}

by Mikael Lofjärd

Somewhat backwards compatible

I got the chance to see my blog through the eyes of Internet Explorer 8 today, and boy was that something to behold. There was no styling whatsoever. It looked like the Internet pre-1995.

This site uses HTML5 and CSS3, and that is fine for most modern web browsers. Even those that have no support for HTML5 render OK, mostly due to the fact that they ignore the tags they don’t recognize and just render them as they would any <div>-tag. But not dear Internet Explorer. Internet Explorer < 9 just ignores the tags and doesn’t style what it doesn’t know. Awesome!

Good thing then that a couple of guys realized that you could trick IE into rendering unknown tags by creating them with JavaScript.

document.createElement("mysuperspecialtag");

The above piece of JavaScript is all that is required for IE to accept all <mysuperspecialtag>-tags and style them with CSS.

Thankfully they built a little script that does this (and a bit more) for all the new HTML5-elements.

<!--[if lt IE 9]>
<script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->

Include the above in the head of your HTML document and you’re good to go.
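Under the hood, the script is essentially the createElement trick in a loop over the new element names. A simplified sketch (not the actual html5shiv source, which does quite a bit more):

```javascript
var html5Elements = 'header nav hgroup section article footer address'.split(' ');

function shiv(doc) {
  // creating each element once is enough to make IE < 9
  // parse the tags and apply CSS to them
  for (var i = 0; i < html5Elements.length; i++) {
    doc.createElement(html5Elements[i]);
  }
}

// in the browser: shiv(document);
```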

Now this just solves the IE problem. HTML5 defaults to rendering most of the new elements as blocks, but older browsers don’t necessarily do this, so it’s considered best practice to declare it explicitly in your CSS file.

header, nav, hgroup, section, article, footer, address {
	display: block;
}

Those are not all of the new HTML5 elements, but you can add whatever else you use on your site.

DISCLAIMER: I’m at home right now with nothing but the latest and greatest browsers at hand so I can’t check that my changes actually worked, but at least I’ll get to look at it with IE 8 again tomorrow.

by Mikael Lofjärd

The Archive

While looking at my Google Analytics statistics I noticed that my empty archive page had almost as many hits as the front page.

And so, with little to lose, I set out to create a dynamic archive.

Now I don’t even have paging on my front page yet, but since this is my 10th post I really should make that my next priority.

by Mikael Lofjärd

Notes from Øredev

As you know, I went to the Øredev conference in Malmö, Sweden last week. It was a great conference and I saw a lot of great speakers talk on a lot of great topics.

The first two days of Øredev were “workshop” days. On Monday we had a full-day workshop with Greg Young teaching us the ins and outs of CQRS. The prerequisite for this workshop was a 6.5 hour long video. Between that and the 8 hour long workshop, there was a lot of learning going on about CQRS, Event Sourcing and DDD in general. I always enjoy watching Greg speak because he’s very passionate about everything he does, and that always gets me going.

Tuesday was NServiceBus day. Andreas Öhlund talked about NServiceBus and how it can be used to implement CQRS and Event Sourcing. It was nice to get an amount of repetition from the day before and get a different view on things. I find that things get rooted in my brain much better if I hear at least two different people explaining them.

Wednesday, Thursday and Friday were when most of the attendees showed up and all the little 50-minute lectures got started. There were several different tracks and about 6 lectures each day, plus the opening and closing keynotes.

Here are some of my favourites:

Jon Skeet held a talk about the new async and await keywords in C# 5. True to his nature as a self-proclaimed C# enthusiast he also went ahead and explained how it all worked inside the MSIL compiled code.

Directly after that on the .Net track, Gary Short held a very useful talk on collection classes in C#. I work in C# every day, and I must admit, I knew there were advantages to using different collection classes for different things but I never felt the need to sort out which one to use. Gary tried to explain the pros and cons of as many classes as possible in his 50 minute lecture, but he did a great job and I really felt like I had learned something afterwards.

Phil Haack held a talk on mobile web applications, which apart from the stuff coming in ASP.Net MVC 4 was pretty much what I wrote about two weeks ago. Still, he was a funny guy and I enjoyed his talk.

In the last session of the day I took some of my co-workers to see Felix Geisendörfer give a talk on node.js. His blog at debuggable.com was of great use to me when I started playing with node.js and I felt that someone needed to educate my co-workers on the matter. For some reason they stop listening as soon as I get to the “Server side javascript” part.

Thursday began with an awesome keynote presentation by Dan North called “Embracing Uncertainty”. If you ever get a chance to see it, don’t miss it. Gojko Adzic has a great write-up of the keynote here.

The rest of the day I mostly indulged myself on the UX track where I went to two great talks by Robby Ingebretsen on “Digital Typography” and “Design composition for developers”. Robby was a great speaker. He used lots of examples and shared a lot of great tips and tools that I, as an aspiring designer, found extremely useful.

Unfortunately we went home on Thursday evening, which means that I didn’t go to any lectures on Friday. Instead I slept through half that day, and in the afternoon I started working on the database backend for the blog.

by Mikael Lofjärd
