Friday, January 21, 2011

The Impact of HTML5^H: Feature Detection

Ian Hickson posted a day or two ago that the WHATWG has decided to drop the version number from HTML. On the face of it, this seems like a knee-jerk reaction against an old rival that had just announced the HTML5 logo, but it's a lot more than that. In Hixie's words:

The WHATWG HTML spec can now be considered a "living standard". It's more mature than any version of the HTML specification to date, so it made no sense for us to keep referring to it as merely a draft. We will no longer be following the "snapshot" model of spec development, with the occasional "call for comments", "call for implementations", and so forth.

What does this mean for us developers? It means that the spec will never be finished — the living standard bit — so there's no point waiting for an announcement saying that HTML5 is ready or anything to that effect. But that's just on the surface.

For front-end developers it means one more very important thing: feature detection is now more important than ever, because it's the only reliable way to determine browser support for a feature. Of course, we always knew that, but we rarely practiced it. You might think you did, but until very recently the libraries you were using didn't actually do this. jQuery removed all browser sniffing in jQuery 1.3, and the folks at Dojo have launched a project called has.js to help library authors incorporate feature detection. Modernizr, of course, has been around for a while now, and its list of detectable features keeps growing.
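
To make that concrete, here's roughly what feature detection looks like: you probe the runtime for the objects and methods a feature implies, instead of sniffing the user-agent string. (The supports table below is just an illustrative sketch of mine, but the individual tests are the well-known ones.)

// Detect features by probing for them, not by sniffing the user agent.
var supports = {
    localStorage: (function () {
        // Accessing window.localStorage can throw (e.g. when cookies are
        // disabled), so the test is wrapped in a try/catch.
        try {
            return 'localStorage' in window && window.localStorage !== null;
        } catch (e) {
            return false;
        }
    })(),
    geolocation: 'geolocation' in navigator,
    canvas: (function () {
        var el = document.createElement('canvas');
        return !!(el.getContext && el.getContext('2d'));
    })()
};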

But detecting features alone isn't enough; you also need a way to work around the ones that are missing. This is where polyfills come into the picture. As Remy Sharp describes it,

A polyfill, or polyfiller, is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively. Flattening the API landscape if you will.

That way, feature detection happens at a level below the developer, who can continue coding to the WHATWG/W3C specs. If this works as well as it says on the tin, I think we can all move forward and stop worrying about little pains like IE6.
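
To see what Remy means, consider a tiny real-world case. Date.now is part of ES5, and older browsers don't have it; the classic polyfill looks like this (a minimal sketch, but the pattern is exactly this):

// A classic polyfill: provide Date.now only where the browser doesn't.
if (!Date.now) {
    Date.now = function () {
        return new Date().getTime();
    };
}

// Callers can now code to the standard API everywhere.
var timestamp = Date.now();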

What factors should be considered when designing a way to download polyfills? First, there has to be a way to declare a dependency on a browser feature; based on feature detection, one or more polyfills might then be downloaded for that browser. Second, the download of the polyfill will necessarily be asynchronous, and the coding style will have to take that into account. Third, ideally the download of the polyfill shouldn't slow down the experience. That's impossible to achieve completely, but we should at least employ whatever techniques we have to reduce the impact of the download.
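
The asynchronous download typically works by injecting a script element, which loads without blocking the page. Here's a simplified sketch of such a loader; the loadScript name is mine, and a real loader would also handle older IE, which fires onreadystatechange instead of onload:

// Load a script asynchronously and invoke a callback once it has executed.
function loadScript(src, callback) {
    var script = document.createElement('script');
    script.src = src;
    script.async = true;
    script.onload = callback; // simplified: no error handling, no old-IE support
    document.getElementsByTagName('head')[0].appendChild(script);
}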

How would such an API look? CommonJS has been doing some great work with the Asynchronous Module Definition spec to help solve the general problem of dependencies in JavaScript. Maybe we can look at their API for help. To define a module, you'd generally write code that looks like the following (from their own example):

define("alpha", ["require", "exports", "beta"], function (require, exports, beta) {
exports.verb = function() {
return beta.verb();
}
});

That looks simple enough. The first parameter of define is the name of the module, the second is a list of the module's external dependencies, and the third is the function that declares the module itself. Using RequireJS, the way to use such a module would be (again, from their own examples):

require(["helper/util"], function() {
//This function is called when scripts/helper/util.js is loaded.
});

That's a simple enough API. Unfortunately, we can't use it directly for our polyfills, since we need to download dependencies only if the browser doesn't already support them. So, for our purposes, maybe we need an API that looks like this:

require(["our/polyfill/helper"], function() {
requirePolyfills(["localStorage", "geolocation"], function() {
// Polyfills have been downloaded and applied now if necessary.
// We can code straight to the WhatWG/W3C APIs without worrying about browser differences.
});
})

The requirePolyfills function is responsible for asynchronously downloading whichever polyfills the browser needs, and then calling the function passed in to hand control over to the developer. The developer then doesn't have to worry about browser differences and can write simple code completely devoid of browser-specific hacks. It gets me excited just to think of how awesome it will be to program once we've got browser differences out of the way.
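
Of course, requirePolyfills doesn't exist as a real library function yet. But to show how thin this layer could be, here's a hypothetical sketch built on the supports table and loadScript helper from the earlier snippets (the /polyfills/ URL scheme is an assumption):

// Hypothetical implementation: download polyfills only for missing features.
function requirePolyfills(features, callback) {
    var missing = [];
    for (var i = 0; i < features.length; i++) {
        if (!supports[features[i]]) {
            missing.push(features[i]);
        }
    }
    if (missing.length === 0) {
        callback(); // everything is native, nothing to download
        return;
    }
    var pending = missing.length;
    for (var j = 0; j < missing.length; j++) {
        // Assumes one script per polyfill at a known URL scheme.
        loadScript('/polyfills/' + missing[j] + '.js', function () {
            if (--pending === 0) {
                callback();
            }
        });
    }
}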

Taking this a level further, library authors themselves could do this, to keep the footprint of the library as small as possible. That way Chrome users will have an excellent experience since there are no polyfills to download, while IE6 users will have the worst experience since many polyfills will have to be downloaded. If you think that's worse than the state of affairs today, consider that Chrome users currently download all of a library's hacks for dealing with IE6, and then the code path never hits them. Wasted download, and the worst possible experience for all users, thanks to IE6.

Libraries are slowly migrating towards the CommonJS AMD spec; Dojo just announced partial support for it in their latest version, and the landscape is bound to get better. If polyfills were loaded this way as part of the library itself, it would be awesome: the library would ship with the smallest footprint possible.

That leaves us with two more problems. First, we need a public repository of very well tested polyfills. The quality of polyfills is not something library developers should be competing over, there's no reason for a polyfill to be tied to a library's APIs, and there's no reason one shouldn't be able to use a polyfill without a library at all.

The second problem is slightly trickier: network latency. You wouldn't want to download a bunch of polyfills separately over a slow connection. Ideally, it should be just one file containing all the polyfills required for that browser. This might necessitate a change to the API I've described above. Feature detection obviously still happens on the client, but the client then issues only one request for the polyfills file. The request could look something like: http://example.com/polyfills?include=localStorage,geolocation. The server simply concatenates the polyfills the client has requested and sends them down in one file. Awesome.
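
On the client, this single-request variant only changes the last step of the requirePolyfills sketch above: instead of loading one script per missing feature, the missing feature names are joined into the query string (again using the hypothetical loadScript helper):

// One round trip: ask the server for a single concatenated polyfill file.
var missing = ['localStorage', 'geolocation']; // the result of feature detection
if (missing.length > 0) {
    loadScript('http://example.com/polyfills?include=' + missing.join(','), function () {
        // All the requested polyfills have now been applied.
    });
}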

Unfortunately, I've now added a server component, which suddenly makes this far less appealing to library authors, and for good reason. I don't know of a clean way to fix this, except to say that there should be publicly maintained servers that do this for everyone, something like Google's CDN. That solution isn't ideal for several reasons, but it might be a good stopgap. Of course, open source implementations of such polyfill servers could also be developed in several languages, maybe even as web server modules; I'm thinking mod_polyfill or something similar.
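
To give a sense of how small such a server could be, here's a minimal Node.js sketch that just concatenates the requested polyfill files. It omits everything a real polyfill server would need (caching, minification, versioning, proper error handling), but it shows the idea:

// Minimal polyfill server sketch in Node.js.
var http = require('http');
var url = require('url');
var fs = require('fs');

http.createServer(function (req, res) {
    var include = url.parse(req.url, true).query.include || '';
    var names = include.split(',');
    var body = '';
    for (var i = 0; i < names.length; i++) {
        // Only allow simple feature names, so a request can't escape the
        // polyfills directory. readFileSync will throw for unknown names;
        // real code would validate against a whitelist instead.
        if (/^[a-zA-Z0-9]+$/.test(names[i])) {
            body += fs.readFileSync('./polyfills/' + names[i] + '.js', 'utf8') + '\n';
        }
    }
    res.writeHead(200, { 'Content-Type': 'application/javascript' });
    res.end(body);
}).listen(8080);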

The best solution is not clear yet, but we have an idea of what it might look like. We'll have to see how things evolve in the coming few months to get clarity on this. Meanwhile, what do you think about this? Do you think there might be a better way to solve this problem?

2 comments:

Yacine said...

A question about feature detection: why is this not handled at the HTML standard level? Shouldn't browsers expose info on the behavior they support (or don't)? Why is this handled at the JavaScript level?

Rakesh Pai said...

@Yacine: AFAIK, almost all features being developed today have some thought given to JavaScript-based detection. Also, a lot of this happens automatically because of the nature of JS. However, I don't think that's what you're asking about. I didn't understand what you mean by "HTML standard level". Could you please elaborate?
