Tuesday, October 16, 2007

Client-Side Performance Optimization of Ajax Applications

A lot has been said about server-side performance optimization. But a recent report from Yahoo concluded that the server accounts for less than 5% of the time it takes for a user to view a web page. Here's how you can optimize the performance of your client-side code. Note that this article is targeted at fairly advanced JavaScript programmers working on fairly client-heavy applications.

Network performance

  • This feels just stupid to say: reduce the amount of source code you write. There are several ways of doing this. The first is to simply not write any JavaScript at all, but that might not be an option for you. Another is to lazy-load code - don't download code to the client unnecessarily. This is especially true of single-page applications. Another thing you simply must do is pass your code through a good compressor - like Dojo's Shrinksafe, Crockford's JSMin, Edwards' Packer or the YUI Compressor (or even a combination of those).

    Another thing I've heard most people recommend is gzipping JavaScript files to reduce download time. While this certainly helps, a bug in a rather prevalent version of IE makes me wonder if I should do it. If anyone can prove me wrong, I'll only be glad.

  • In all popular browsers today, JavaScript files are downloaded sequentially. It doesn't matter that HTTP says you can have two connections open. It doesn't matter that CDNs on different domains can serve files in parallel. It doesn't matter that Firefox disregards the HTTP restriction and downloads multiple files in parallel all the time. When it comes to JavaScript, files are always downloaded one after the other.

    This only makes sense, since scripts should be executed in the order in which they appear in the markup. So reduce the number of files to be downloaded. This can be done with a build process, similar to what the Dojo build system does - combining the JavaScript files into one file.

  • Cache Ajax responses. The network is undoubtedly the weakest link in the chain. Avoid making requests. A request once made should never be made again. Implement cache invalidation if you need to. Even then, don't fetch data just because the cache is now invalid - wait till it is actually required. If you never end up needing it, you've saved another hit.
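
    Here's a minimal sketch of such a cache. The names (cachedFetch, responseCache, invalidate) are made up for illustration, and error handling and IE's ActiveX-based XMLHttpRequest fallback are omitted for brevity:

    var responseCache = {};

    function cachedFetch(url, callback) {
        if(responseCache.hasOwnProperty(url)) {
            // A request once made is never made again
            callback(responseCache[url]);
            return;
        }

        var xhr = new XMLHttpRequest();
        xhr.open("GET", url, true);
        xhr.onreadystatechange = function() {
            if(xhr.readyState === 4 && xhr.status === 200) {
                responseCache[url] = xhr.responseText;
                callback(xhr.responseText);
            }
        };
        xhr.send(null);
    }

    // Invalidation just drops the entry - the next cachedFetch for this
    // URL hits the network again, but only when actually required
    function invalidate(url) {
        delete responseCache[url];
    }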

Perceived performance

  • Reduce the amount of source code you've written. I know this sounds like a repeat of a point above, but I had to bring it up from a perception point of view too. The more JavaScript that's downloaded, the more time the browser needs to interpret it - and the interpretation time seems to grow at an exponential rate, not a linear one. Which means that even after your code has been downloaded, the browser will just sit there doing (apparently) nothing for some time. Usually, this becomes a problem above, say, 500 KB of code.

  • In any case, if you are downloading 500 KB of JavaScript on page load, there had better be a very good reason for it. You should be able to get much faster load times by splitting these files into modules, which you can download at a later time - maybe on demand.
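
    For instance, one common way to load a module on demand is to inject a script tag when it's first needed. A rough sketch - the URL and initReportsUI are made-up examples, and IE's onreadystatechange variant is left out:

    function loadModule(url, onload) {
        var script = document.createElement("script");
        script.type = "text/javascript";
        script.src = url;
        // Fires once the module has been fetched and executed
        script.onload = onload;
        document.getElementsByTagName("head")[0].appendChild(script);
    }

    // Download the reports module only when the user asks for it
    loadModule("/js/reports.js", function() {
        initReportsUI();
    });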

  • Get something downloaded and displayed as soon as possible. This might be something as simple as markup with the UI skeleton for the application and simple "Loading..." indicators. It helps a great deal in reducing the frustration of working with an application.

  • If you can help it, put your JavaScript includes at the bottom of the page. This gives the browser time to download and render most of the page before it even starts to mess with your scripts. Considering that JavaScript downloads sequentially and doesn't let any other resource download in the meantime, you should only download JavaScript once you already have something to show the user.

JavaScript performance

There's a lot that can be said here. I've been getting a lot of kicks lately out of trying to milk every millisecond of performance from the browser. So here's what I've learnt so far.

  • If you are writing a function that returns an array, you usually want to pass in a callback as a parameter of that function instead. This can improve performance by 100% or more.

    Instead of:


    var anArrayOfData = getAnArrayOfData();
    for(var i=0; i<anArrayOfData.length; i++) {
        // do something with anArrayOfData[i]
    }

    Do the following:


    getAnArrayOfData(function(item) {
        // do something with item
    });

    This is better because you usually loop inside the function anyway to build the array. Having to loop through the returned array again is a waste of processing time.

    Instead of:


    function getAnArrayOfData() {
        var returnData = [];

        for(var i=0; i<largeSetOfData.length; i++) {
            // Some code...
            if(condition === true) {
                returnData.push(largeSetOfData[i]);
            }
        }

        return returnData;
    }

    Do:


    function getAnArrayOfData(callback) {
        var returnData = [];

        for(var i=0; i<largeSetOfData.length; i++) {
            // Some code...
            if(condition === true) {
                returnData.push(largeSetOfData[i]);
                if(callback) {
                    callback(largeSetOfData[i]);
                }
            }
        }

        return returnData;
    }

    This way, the callback parameter is optional, and you still return the returnData, but you could also provide the callback function and avoid another external loop to iterate through the return data. I've changed all the getElementsBySelector methods in my libraries to use this approach, for example. It only seems logical - if I get an array, I will usually need to iterate through it.

  • Use native functionality whenever possible. Case in point: forEach iterators. forEach is very helpful and part of the JavaScript 1.6 standard, but the most popular browser in the world can't do forEach loops. Most people either live with it, write their own forEach iterator using simple for statements, or use a library that has this built in. If you are the second type of person, you aren't achieving much except code readability - which is not a bad thing - and most frameworks' forEach loops take much the same approach. However, there's a better way.


    function myForEach(array, callback) {
        if(array.forEach) {
            array.forEach(callback);
        } else {
            for(var i=0; i<array.length; i++) {
                callback(array[i]);
            }
        }
    }

    In almost all browsers, the block in the if statement will be executed, giving the best possible performance, since you are using native functionality. In less modern browsers - which happen to be more popular, which you have to support, and for which you can't do much more optimization - the else block will still work. Think of it as the graceful degradation principle of CSS applied to performance.

    Now, I haven't done it above, but I strongly recommend that you stick to the JavaScript standards when deciding the function signatures of both the myForEach and the callback functions. That way, if the world does become a better place one day, and the most popular browser in the world actually learns how to be modern, your code will use the most optimal features of the browser without you having to change a single line.
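
    For what it's worth, a version that follows the standard signature - the callback receives the item, its index and the array, and an optional this object can be passed in - might look something like this sketch:

    function myForEach(array, callback, thisObject) {
        if(array.forEach) {
            // Native implementation, as in the JavaScript 1.6 standard
            array.forEach(callback, thisObject);
        } else {
            for(var i=0; i<array.length; i++) {
                // Same signature as the standard: item, index, array
                callback.call(thisObject, array[i], i, array);
            }
        }
    }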

  • Don't build too much scaffolding code to make JavaScript behave like classical object-oriented programming languages. You usually won't end up with much more than helper functions anyway. A lot of the paradigms of classical OO don't apply to JavaScript. Learn to use JavaScript for what it can do; don't make it into what you want it to be.

  • Use threads. Ok, JavaScript doesn't really do threads at all. However, you can kinda simulate the effect of threads. What you achieve by doing this is that you hand control back to the browser for a brief instant before proceeding with your code. This gives the browser time to react to any other user action that might have happened, make any updates to the DOM that you had asked for, bypass that nasty "the script on this page is unresponsive" warning, etc. So, how do you do this?


    // some code
    setTimeout(function() {
        // some more code
    }, 0);

    If you don't understand exactly how this works, it could be a source of a lot of bugs, so use it with caution. However, I've used it very successfully to get very high apparent performance.
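
    As an illustration, here's one way to process a large array in chunks, yielding to the browser between chunks. The name processInChunks, the placeholders aVeryLargeArray and expensiveOperation, and the chunk size of 100 are all arbitrary choices for the example:

    function processInChunks(items, processItem, chunkSize) {
        var i = 0;

        function doChunk() {
            var end = Math.min(i + chunkSize, items.length);
            for(; i < end; i++) {
                processItem(items[i]);
            }
            if(i < items.length) {
                // Hand control back to the browser for an instant,
                // then continue where we left off
                setTimeout(doChunk, 0);
            }
        }

        doChunk();
    }

    processInChunks(aVeryLargeArray, expensiveOperation, 100);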

  • Cache function results. If a function will always return the same result for a given set of parameters, you really only need to calculate it the first time. Once calculated, save the data in a variable, and read from that variable henceforth. For example:


    var squaresCache = {};
    function getSquare(number) {
        if(!squaresCache[number]) {
            squaresCache[number] = number * number;
        }

        return squaresCache[number];
    }

    The example above isn't very good for at least two reasons. Firstly, using this pattern for computing squares is just plain stupid. Secondly, it seems (though it need not be) that squaresCache is a global variable, which is plain evil in any programming language. However, I hope it illustrates the idea of populating the cache the first time the function executes and subsequently reading from the cache instead of re-calculating the data.

  • Strings in JavaScript, as in many other languages, are immutable. So, for lots of string concatenation operations, you need to use the string builder pattern in JavaScript too. The simplest way to do that is to declare an array instead of a string, push strings into that array instead of concatenating, and finally call array.join("") to get the concatenated string.
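
    A trivial sketch of the pattern:

    var parts = [];
    for(var i=0; i<1000; i++) {
        parts.push("<li>Item ", i, "</li>");
    }
    // One join at the end instead of a thousand concatenations
    var markup = parts.join("");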

  • Do not use eval. Eval is painfully slow. But why do you need eval at all? Other than for converting a JSON string to an object, I never write any code that needs to be eval'ed. Remember eval's cousins too: new Function(someString) and setTimeout(someString, ms). You don't need the Function constructor at all, and you never need to pass strings to setTimeout. In both cases, use anonymous functions instead, and the implicit eval is avoided. Anonymous functions give the added benefit of retaining variable scope through the closure, whereas an eval is always carried out in the global scope.
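
    To make the setTimeout case concrete (doSomething and value are placeholders):

    // Bad: the string is implicitly eval'ed, in the global scope
    setTimeout("doSomething(" + value + ")", 100);

    // Good: no eval, and the closure retains access to local variables
    setTimeout(function() {
        doSomething(value);
    }, 100);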

  • Unfold your if statements. This is particularly useful for code that checks for browser features. For example, instead of:


    function addEvent(element, eventName, callback) {
        if(element.addEventListener) {
            // add the event one way
        } else {
            // add the event another way
        }
    }

    Do the following:


    var addEvent;
    if(document.addEventListener) {
        addEvent = function(element, eventName, callback) {
            // add the event one way
        };
    } else {
        addEvent = function(element, eventName, callback) {
            // add the event another way
        };
    }

    This unfolding of ifs applies even to loops, for example. So, if you can keep if statements outside a loop, do that. It doesn't make for readable code, but it's significantly faster. Bonus points to you if you just thought to yourself that my forEach example can be improved using this technique.
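
    For the record, here's what that unfolded myForEach could look like - the feature check happens once, when the script loads, instead of on every call:

    var myForEach;
    if(Array.prototype.forEach) {
        myForEach = function(array, callback) {
            array.forEach(callback);
        };
    } else {
        myForEach = function(array, callback) {
            for(var i=0; i<array.length; i++) {
                callback(array[i], i, array);
            }
        };
    }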

DOM performance

Of all the parts that make up client script, DOM manipulation is the slowest. So you have to take the most care here.

  • Use innerHTML. Don't be too much of a purist. Being a purist won't make your application faster. You wouldn't believe how much faster your code is if you use innerHTML.

  • Never update the DOM. Ok, if that's not possible, at least do it as infrequently as possible. Bunch up your updates to the DOM and save them for a later time. Realize that it is not the size of the update but the high frequency of updates that's slow. Doing appendChild in a loop is updating the DOM frequently. Caching the markup in a string and then setting the innerHTML at the end is batching and updating infrequently. The second is much faster.
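
    A sketch of the two approaches, assuming items is an array of strings and list is a UL element:

    // Slow: touches the live DOM on every iteration
    for(var i=0; i<items.length; i++) {
        var li = document.createElement("li");
        li.appendChild(document.createTextNode(items[i]));
        list.appendChild(li);
    }

    // Faster: build the markup as a string, update the DOM once
    var html = [];
    for(var i=0; i<items.length; i++) {
        html.push("<li>", items[i], "</li>");
    }
    list.innerHTML = html.join("");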

  • However, the technique above is mostly only useful if you are adding new stuff to the DOM. What if you are updating existing elements? How do you keep updates to a minimum when you want to change the style, class names, content and children of a node that already exists? Simple: clone the node you want to work with. The cloned node doesn't exist in the DOM, so updating it doesn't affect the document at all. When you are done with your manipulations, replace the original node with the clone. Note that the performance problem here is the reflow and re-rendering the browser has to do on every update. You might get similar benefits by simply hiding the element first, making the changes, and then showing it again. Though I haven't tried this, it should work in theory.
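
    A sketch of the clone-and-replace idea ("widget" is a made-up id):

    var node = document.getElementById("widget");
    var clone = node.cloneNode(true); // deep clone, not in the DOM

    // Make all the changes on the clone - no reflows happen here
    clone.className = "highlighted";
    clone.style.width = "300px";

    // One swap, one reflow
    // (careful: listeners attached with addEventListener aren't cloned)
    node.parentNode.replaceChild(clone, node);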

  • Keep track of events. For me, this is the worst part of working with the DOM. This is important because when your application (or any of its DOM nodes) is being unloaded or destroyed, you have to manually unregister the events from the nodes BEFORE you destroy the elements. Yes, this is the garbage collector's job, and that's supposed to be handled by the environment your code runs in, but guess which browser is the offender here. Internet Explorer doesn't free all the references even when the user leaves your web page. Unless you want your web app to earn the reputation of being responsible for many a crashed browser, and a horrid browsing experience for other websites too, count your references.
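
    One simple way to count your references is a registry that records every handler you attach, so they can all be detached before teardown. A sketch, assuming an addEvent wrapper like the one discussed earlier and a removeEvent counterpart:

    var registeredEvents = [];

    function registerEvent(element, eventName, callback) {
        addEvent(element, eventName, callback);
        registeredEvents.push([element, eventName, callback]);
    }

    function unregisterAllEvents() {
        for(var i=0; i<registeredEvents.length; i++) {
            var e = registeredEvents[i];
            removeEvent(e[0], e[1], e[2]); // assumed counterpart to addEvent
            e[0] = e[2] = null; // break the references for IE's benefit
        }
        registeredEvents = [];
    }

    // Call unregisterAllEvents() before destroying nodes, or on unload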

  • If you are iterating through a node list to attach event handlers, you are probably wasting processor time. Instead, attach a single event handler to some parent of those nodes and read from the event object to find out which node was actually clicked. You save the cycles required to iterate over the nodes this way.
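
    A sketch of this delegation pattern, assuming list is the parent of a set of li elements and addEvent is the wrapper from earlier:

    addEvent(list, "click", function(event) {
        event = event || window.event; // IE keeps the event on window
        var target = event.target || event.srcElement;

        if(target.nodeName.toLowerCase() === "li") {
            // handle the click on this particular item -
            // one handler on the parent instead of one per li
        }
    });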

  • Avoid calls to functions like getElementsBySelector, where there's a lot of DOM walking involved. If you cannot, then make sure you work on as small an area of the DOM as possible. If your favourite version of getElementsBySelector lets you pass in a root node under which to search, do that. Otherwise, provide very high specificity, starting with a "#someId", so that the function can narrow down the search. Also, understand how these functions work internally. For example, you could use getElementsByClassName to find divs with the class "foo", and the implementation of getElementsByClassName will probably be just three lines. However, getElementsBySelector("div.foo") will be faster in almost all frameworks, even though its implementation might have a hundred lines of code, because it has less DOM walking to do.

Sorry for the kinda horrible organization of this post. I should also say that not all of these ideas are originally mine - I found many of them by reading sites scattered across the web. However, I hope you found this post useful.

3 comments:

Zohnee said...

Now that's what I call learning from experience... I remember you had an argument with me, telling me that using innerHTML is not the right way of doing it...

Rakesh Pai said...

Hey John,

I would still say that innerHTML is not "the right way", but what the hell - it works, and it's fast.

Sorry if I actually had an argument. Didn't mean to piss anyone off.

Zohnee said...

Hey Rakesh,

You know who can piss me off... Besides, that's the funda: if you have to choose between speed and accuracy, choose speed. BTW, this only applies to IT application development, considering how fast technology changes.



