27/10/2011

Super efficient HTML pages/apps for mobile and desktop

I've been toying with this idea and I can't say I've seen it before... it goes a little something like this:

Take this blog ( I didn't write the software obviously, but that's beside the point ): there is the page chrome ( the stuff around the edges ) and the content. The content consists of a bunch of ( uninteresting ) blog posts.

The page chrome isn't overly interesting... it's exactly the same for every page you view - and it's sent every time you view any page. Sounds bad to me! Plus for mobile users we probably don't even need half the chrome. We might want a little header or something, but not all the sidebars and whatever. Now you could say "media queries!" but that doesn't stop useless HTML being sent to the browser.

The blog posts themselves can obviously be cached as well. Once we've seen them (or at least the browser has) we really don't need to fetch them again unless they've been updated for some reason.

( If I'd thought this through a bit better I would have used a different example... ) If this were some sort of web app we'd also have a bunch of JS that would be loaded... but we don't... but imagine we do.

So I see a lot of waste here just to show a page of this blog. I don't need the chrome, it probably hasn't changed. I don't need the (imaginary) JS code, it probably hasn't changed. I don't need the posts we've already seen - they probably haven't changed... Urrrm... we probably do want to check whether there are any new posts! So you can see there is quite a lot of waste going on here.
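To make the "only fetch what's new" bit concrete, here's a rough sketch of what the client side could look like. The /posts?since= endpoint and the localStorage key are completely made up - it's just to show the shape of it:

// Sketch only: cache posts we've already seen in localStorage and only ask
// the server for anything newer. "/posts?since=" is a hypothetical endpoint.
function loadPosts(render) {
  var cached = JSON.parse(localStorage.getItem('posts') || '[]');
  var newestSeen = cached.length ? cached[0].id : 0;

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/posts?since=' + newestSeen);
  xhr.onload = function() {
    var fresh = JSON.parse(xhr.responseText); // only the posts we haven't seen
    var all = fresh.concat(cached);
    localStorage.setItem('posts', JSON.stringify(all));
    render(all);
  };
  xhr.send();
}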

The classic method of optimising your HTML page is to heavily cache CSS ( oh, I forgot CSS! ), JS and images. The best way to do this in my opinion is to use some sort of revisioned URLs. For example /img/r12345/foo.png. You can grab the revision from your repo in your build scripts, or just use a hash or whatever, and shove it in the URLs in your HTML/CSS/JS so that links reference a specific version of the file. That file is then served with headers saying it never ever ever expires. When you update your HTML/CSS/JS with a new URL the browser will refetch the resource. It's absolutely the best approach because the browser will never attempt to refetch a resource that hasn't changed. What if we could take this idea a little further...
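As an aside, the build step for those revisioned URLs can be tiny. This is just a sketch ( a Node-style script, with the paths and helper made up for illustration ) - hash the file contents and splice the hash into the URL:

// Sketch: use a content hash as the "revision" so the URL changes whenever
// the file does. Anything served under /r<hash>/ gets far-future expiry headers.
var fs = require('fs');
var crypto = require('crypto');

function revisionedUrl(urlPath) {
  var hash = crypto.createHash('md5')
                   .update(fs.readFileSync('.' + urlPath))
                   .digest('hex')
                   .slice(0, 8);
  return urlPath.replace(/^\/(\w+)\//, '/$1/r' + hash + '/');
}

// e.g. "/img/foo.png" -> "/img/r3fa2b91c/foo.png"
console.log(revisionedUrl('/img/foo.png'));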

I'm a massive fan of what I call JS templates - essentially any form of client side templates. These are great. They mean you send a load of JSON to the browser and it can render HTML client side ( because we all know heavy DOM manipulation to create/update/delete DOM elements is a sin, right? ). Plus I'm always polling or doing websockets or something to pull in new data, and then I need the templates client side anyway. So what if we use client side templates to render the page chrome...? "What? Why?" you're saying, I would think. Well, one reason is that in the client you can check the environment you're running in a whole lot better than guessing by looking at the UA on the server.
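For anyone who hasn't played with client side templates, here's the smallest possible hand-rolled illustration ( not any particular library, and with none of the escaping you'd want in real life ):

// Take JSON from the server and turn it into HTML on the client,
// instead of shipping the rendered HTML down the wire every time.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function(_, key) {
    return data[key];
  });
}

var postTemplate = '<article><h2>{{title}}</h2><p>{{body}}</p></article>';

// the data would normally arrive as JSON over XHR / websockets
var html = render(postTemplate, { title: 'Hello', body: 'World' });
document.getElementById('content').innerHTML = html;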

I'm going to stop talking now and show some code:

index.html:

<html><head>
<meta name="viewport" content="width=device-width" />
<script src="/js/r12345/main.js"></script>
</head><body></body></html>

main.js:

// detect small screens in the client rather than guessing from the UA on the server
var handheld = window.matchMedia('only screen and (max-device-width: 480px)').matches;

if( handheld ) {
  some_module_loader.require([ /* ...JS/templates/CSS required for mobiles... */ ]);
} else {
  some_module_loader.require([ /* ...JS/templates/CSS required for desktops... */ ]);
}

So we've sent the browser a single index file. It just loads a single versioned JS script, which in turn loads a bunch more versioned URLs that are specific to the device it's running on. We can now perfectly tailor the experience to the device. Also, new versions of files will only be downloaded when they change.

The downside is that, if you were visiting this blog, you'd make one request for our tiny HTML file and then need one more request to check there were no new blog posts. This may or may not be a problem depending on what you're doing. It would be kind of bad for a blog, but for an email app it wouldn't be too bad. ( I'm currently having wild ideas about a cookie sending the last blog post you saw in the initial request, which would then mean the HTML file could include a list of new blog posts too... not quite thought that through... there's a speculative sketch below. ) The upside to this technique is of course incredibly small requests. I have absolutely no profiling data to show for this idea - but it does sound pretty good!
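For what it's worth, the client half of that cookie idea might look something like this - entirely speculative, the cookie name is made up and I haven't thought about the server side at all:

// Speculative sketch: remember the newest post id in a cookie so the very
// first request already tells the server what we've seen, and it can inline
// just the new posts into the HTML it sends back.
function rememberNewestPost(id) {
  document.cookie = 'newest_post=' + id +
                    '; path=/; max-age=' + (60 * 60 * 24 * 365);
}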

So has anyone done this? Is it stupid? Tell me!

2 comments:

Unknown said...

It looks pretty similar to what GWT does to handle:
- efficient caching of JS files: they're identified by their hash
- multi-platform/browser management: if you're on Firefox, you only get the ffox Javascript and not all the IE crap.

Dunk said...

Thanks for the hint - I'll take a look and see what they do.