Thursday, January 27, 2005

Thoughts on XMLHttpRequest

Some more thoughts on XMLHttpRequest, from my recent experiences with the tool.

I was going through my referrer list today, and came across another user who reached me through Eric Meyer’s post about the Economics of XHTML (written in response to my older post), where Eric talks about how the savings in bandwidth add to the value of a site.

These days, I have moved away from talks about markup (as most of you must have noticed), mostly for two reasons.
  1. XHTML is too stringent. It is so stringent, it is not practical.
  2. There are other ways to save bandwidth.
I have to explain. Though I’ve been talking about standards frequently on my site, only recently have I decided to get my hands dirty with a project that talks in XHTML and respects it (which is more difficult than it seems at first glance), at least when talking to browsers that can handle it well.

A reasonable digression: Of late, I have been damn busy. I guess everyone goes through these phases where he thinks that he’s reading too much RSS, but I’ve just stopped keeping myself updated because I don’t have the time anymore. However, I still take the time to comment on the odd good article.

One such post was Roger Johansson’s. He talked about how XHTML can be made easier to use, but in his comments (though he tried hard not to), the discussion shifted to whether XHTML is really worth the effort or not. I maintained that though XHTML is a good way to learn, it is probably not what we’ll use in practice.

That apart, here’s why I am making this post.

I’ve been working on a lot of DHTML these days. Not in the traditional sense – I probably have to call it DOM Scripting to be politically correct. XMLHttpRequest has been my most interesting toy. I’ve been playing with it like a baby boy plays with his first toy truck. He just rides it all over the place. He runs it over terrains that the toy couldn’t possibly take on, just because somewhere in his heart he’s confident that it’ll hold out. XMLHttpRequest has been my toy, and the web is my terrain.

Here are my lessons, good and bad, from working with XMLHttpRequest.
  • I think this is the most important issue. Using XMLHttpRequest doesn’t change the URL. Think about this. Eric said that a page that loads faster is good for business, and XMLHttpRequest does that for you in more ways than one. But XMLHttpRequest doesn’t respect URL resources, as seen by the address bar. If you use XMLHttpRequest, your page’s URL remains the same, whatever else you do. Do you need permalinks for your content? XMLHttpRequest can't deliver.
  • Using XMLHttpRequest involves a lot of DOM Scripting (especially if you think I am correct when I say that data should be passed as JavaScript over XMLHttpRequest). This is a problem for me, mostly because I use XMLHttpRequest in conjunction with XHTML. A command as simple as element.innerHTML won’t work with XHTML. You’ll have to get into the mess of document.createElement, element.setAttribute and element.appendChild way too often. There are a couple of tutorials on the web to explain these concepts, but mostly you are on your own.
  • XMLHttpRequest doesn’t come without its differences between browsers and their implementations. I don’t mean to scare the ones who are starting – it’s really not that difficult. But just don’t expect one piece of code to work everywhere. We haven’t reached markup, styling and scripting nirvana yet.
  • I’ve come across a lot of browser bugs in the implementation of XHTML and XMLHttpRequest scripting. I use Firefox, and (ok, probably after Opera) I think Firefox’s support for standards is pretty good. Yet, Firefox catches me off guard sometimes. And don’t get me started about other browsers yet, including the most popular one. (By the way, for the uninformed, XMLHttpRequest is NOT part of the standard – probably why Opera hadn’t supported it until recently.)
  • In the end, probably in a nerdy sort of way, XMLHttpRequest is fun. It makes entire page-loads unnecessary. It makes information easier and faster to get to. It makes the user happy. It doesn’t involve too many UI issues. (I thought a user will never be comfortable unless he sees a page refresh when he clicks on a link in the browser. Obviously, I am wrong.) If well planned, XMLHttpRequest rocks!
  • XMLHttpRequest is tempting to overuse. This is not such a problem from the developer’s point of view. However, I am not so sure it’ll really save you bandwidth in the long run. The problem is that, since users find it so responsive, they will want to pamper themselves with more data than they’d otherwise need. XMLHttpRequest makes getting it easy and fast – just what users need to have a good experience. I think all that talk about XMLHttpRequest saving developers’ bandwidth needs to be re-thought. I have no math to back me up, but this is what my hunch tells me.
  • Did I say yet that XMLHttpRequest rocks? It’s the best thing that happened to the web since the invention of HTTP itself. It’s the stuff that’s going to direct the development of the Internet itself for the next half decade. But like every other intoxicating substance, it could be a life-saver or a poison depending on how you use it.
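To make the point about browser differences concrete, here’s a minimal sketch of the instantiation dance that’s needed at the moment. (This assumes the common "Microsoft.XMLHTTP" ProgID for IE 5/6; newer IE versions have other ProgIDs, so real code often tries several.)

```javascript
// A minimal cross-browser factory for XMLHttpRequest, circa early 2005.
// Mozilla, Safari and Opera 8 expose a native XMLHttpRequest object;
// IE 5 and 6 only offer it through ActiveX.
function createRequest() {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest();                   // Mozilla, Safari, Opera
  }
  if (typeof ActiveXObject !== "undefined") {
    return new ActiveXObject("Microsoft.XMLHTTP"); // IE 5/6
  }
  return null;                                     // no support at all
}

// Typical usage (sketch only; the URL is made up):
function fetchText(url, callback) {
  var req = createRequest();
  if (!req) { return; }
  req.onreadystatechange = function () {
    if (req.readyState === 4 && req.status === 200) {
      callback(req.responseText);
    }
  };
  req.open("GET", url, true);
  req.send(null);
}
```

Once you have this factory in one place, the rest of your code can at least pretend that the browsers agree.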
I guess I have no point to make in the end – I am probably too drunk to conclude. (I’ve been celebrating a VERY close friend’s wedding engagement tonight.) But I hope I’ve put across the point that XMLHttpRequest is good and yet bad, in its own ways. And that is so much like every other programming tool.

It’s just a matter of what suits the job best. Flying a jet to work is too much power in your hands, but walking down to work is too little, if you know what I mean. You just have to use the right tool at the right time. XMLHttpRequest is just a tool – you have to decide when it’d be good and when it’d be impractical.

For those who are in doubt, XMLHttpRequest is a tool you are not going to be able to do without. I think that web-developers who still work with old style HTML are already on their way to losing out, but now web-developers who haven’t mastered XMLHttpRequest are on their way to meeting the same fate. Yet, it’s a tool like any other – it has to be used at the right time for the right purpose. Every other time, it’s probably more effort than it’s worth.

Monday, January 24, 2005

Jazz By The Gateway

This pic was taken at the Gateway of India, during a jazz show as part of the Mumbai Festival. Performing at the show were some of the greatest living exponents of the art of jazz - Al Jarreau, George Duke, Ravi Coltrane and Earl Klugh along with members of The Thelonious Monk Institute of Jazz.

I didn't go for the show, if you had to ask. (I'd probably have got better pics if I had, but I was too late to get the tickets.) This is a Hail Mary shot taken from outside.

For readers who do not understand photography jargon, Hail Mary shots are where the photograph is taken with the camera held above your head, like in a crowd, without looking through the viewfinder. The camera settings have to be adjusted beforehand, and the frame is just plain guessed.

In this case, I was outside a barricade containing the show. I couldn't see this frame - I had a sheet of asbestos between me and the Gateway. I only held my cam above that barricade and shot.

(Ok, I promise that's the last post about the Gateway, its waters, or of the Mumbai Festival.)

Blog Of The Day

This is probably not a big feat, but the excitement is because this is a first.

This blog has been featured as the Blog of the Day for today, on BlogStreet India.

Cheers!

Wednesday, January 19, 2005

Mumbai Fest - Some Pics

There was a huge collaborative street art project at Kala Ghoda, Mumbai as part of the 10-day Mumbai Fest. I happened to be around, and get some pics. (The pics link to Flickr)

Pic 1

Pic 2

Pic 3

Pic 4

Support drugs. Do good karma.

Tuesday, January 18, 2005

Illegal To Display Feeds?

I came across (and I must say this) a really stupid post at The Trademark Blog, where the author accuses Bloglines of illegally reproducing his site's content.

It was brought to my attention that a website named Bloglines was reproducing the Trademark Blog, surrounding it with its own frame, stripping the page of my contact info. It identifies itself as a news aggregator. It is not authorized to reproduce my content nor to change the appearance of my pages, which it does.
I think thoughts like these stem either from a poor understanding of feeds, or from this guy just looking for legal issues to talk about (he's a lawyer, after all), or both.

Bloglines might be putting their ads next to my content and earning money out of it. I don't think that constitutes "commercial use" of my content. They are providing a service, for which they are going to need money to run, and ads are a good way to get that done. The ad money I make off my site is barely anything, anyway. I don't mind if I lose an additional $2.05 to Bloglines, considering that my user-base is actually increasing thanks to services like Bloglines. I could think of it as paying them to keep my audience coming.

Besides, it's not like Bloglines is running a site that makes money thanks to my content being published there. The users of Bloglines choose to put a feed in their reader, and Bloglines just provides a means to read that feed easily. Bloglines makes money because they provide a great way for users to read a site's content, grabbing what the site's authors have willingly distributed for just that purpose.

Personally, I read a lot of feeds too, a lot of which I prefer to read in Bloglines and not on the original sites. This is because, with Bloglines, I can strip off the clutter on a site, and grab just the content of the site, in a font, style and layout that I am comfortable with, and that works with all the other 130 sites in my feeds list, consistently.

I did come across a similar problem recently, where a blogger had published one of my recent posts on his site, even though he gave me full credit for the post. I just sent him a friendly mail asking him to get my content off his site, and instead link to me if my articles are really important to his site's audience. Promptly, he removed that entire section off his site. (Apparently, it was his personal link blog, and he has now put that behind an authentication system ensuring that he's the only one reading it.) I am ok with him reading my feeds, really. I guess it would be a problem if he publishes my posts without asking me first. Publishing my post on his site, and reading my feeds are two different things, though. Obviously this blogger respected that when I brought it to his notice, and did the needful to correct the mistake.

In the end, is it illegal to view/display/publish feeds from a site? No, it isn't, considering that the site's author willingly distributed the feeds. Is it illegal to publish one single article on a site without the author's permission? It probably is, but more than that, it's unethical, and should be refrained from. Is it illegal to make money by providing a service that reads feeds? Definitely not!

I think I am thankful to Bloglines for making sure that I get a consistent flow of people reading my content, even if they don't ever visit my site. If I don't like Bloglines' idea of reading feeds, I should probably just stop publishing feeds. But that'll just crash my audience-size. In fact, I like the idea of Bloglines displaying my feeds so much, that I had written a post earlier recommending that people read my feeds through Bloglines, and I even added a permanent link on my sidebar making it easier for people to subscribe to my feeds with Bloglines.

How about that, Mr. Lawyer?

Sunday, January 16, 2005

Lights, Camera, Arches!

Shot the other day at the Gateway of India. I had to clean up the monument a lot with my copy of Photoshop, because the government hadn't done a good job. (Thanks Kaushik, for patiently showing me how to play with Photoshop.)

Wednesday, January 12, 2005

I Graduated!!!

News has just come in. I am finally a graduate.

Bachelor of Engineering (Electrical and Electronics). Aaaah! That feels good!

Meet me at my local watering hole tonight to join in the celebrations.

Why XML in XmlHTTPRequest?

The XmlHTTPRequest function was designed so that the client can get small pieces of information from the server without having to do a page refresh. Obviously, the developers thought that XML would be the format of choice for this data exchange.

Thank God they didn’t make XML validation necessary!

I say this because XML will not be the format of choice for this data exchange. Let’s see why.


Firstly, let’s see what the data flow looks like in the case of XML over XmlHTTPRequest. A typical scenario is where the data lies in a database on the server, and the data is requested by the client. The steps in the process will be:
  1. Server reads the request, extracts the data from the database.
  2. Server prepares an XML document using this data.
  3. This XML is sent to the client.
  4. Client validates the XML for well-formedness.
  5. Client extracts data from the XML markup.
  6. Client calls functions/triggers events which use this data as parameters in some way or other.
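To make steps 4–6 concrete, here’s roughly what the client-side handler looks like once the XML arrives (the element names here are made up for illustration). Every line of it exists only to undo the markup the server added in step 2:

```javascript
// Hypothetical client-side handling of the XML response.
// xmlDoc stands for what req.responseXML would hand us after the
// parser has checked the document (step 4).
function handleXmlResponse(xmlDoc) {
  // Step 5: dig the data back out of the markup.
  var node = xmlDoc.getElementsByTagName("message")[0];
  var text = node.firstChild.nodeValue;
  // Step 6: finally call the function that actually wanted the data.
  return showMessage(text);
}

function showMessage(text) {
  // Stand-in for whatever the application really does with the data.
  return "Got: " + text;
}
```

All of that traversal code is pure overhead: the application only ever wanted the string.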
For several reasons, this model is not optimum. The questions that come to mind immediately are:
  • Is it necessary to prepare an XML document (step 2 above)? Isn’t that an extra step that is probably not necessary?
  • Client validation is necessary (step 4 above) to ensure that the XML parser can parse the XML. This puts heavy processing requirements on the client. Is this really necessary?
  • The data from the XML will then have to be extracted (step 5 above). This isn’t really a necessary step either if the data is not marked up as XML. So, then, is it really necessary to use XML?
Then there are more problems. In web applications, where XmlHTTPRequest is most likely to be used, speed is of utmost importance, and optimizing for fast response times is imperative. The time taken for any operation, especially one that happens over the network or the internet, has to be minutely scrutinized and optimized. Seen this way, XML is probably not the best suited format for data exchange. XML files are heavy as far as sizes go. A simple text like “Hello World” (11 characters) can be transmitted over XmlHTTPRequest as 11 bytes of data. Marking this up as XML will increase this at least 4 fold (say, and that’s still a conservative estimate) even if whitespace is stripped off the document, quadrupling download sizes and slowing responses correspondingly. The point is, XML comes with overheads which cannot be tolerated in a web application.
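To put rough numbers on that claim (a toy illustration, not a benchmark), compare the raw string against even a minimal, hypothetical XML envelope around it:

```javascript
// Toy size comparison: the raw payload vs. a minimal XML envelope.
var payload = "Hello World";                     // 11 characters

var xmlEnvelope =
  '<?xml version="1.0"?>' +
  "<response><message>" + payload + "</message></response>";

// payload.length is 11; xmlEnvelope.length is 72 - about 6.5 times
// the size, before the document has any real structure at all.
var overhead = xmlEnvelope.length / payload.length;
```

And that’s with a single element and no whitespace; real documents with attributes and nesting only get worse.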

So, in what format will data be transferred over XmlHTTPRequest? The simplest way is as a plain old string of data. This will be the most efficient as far as download speeds are concerned. You will download only 11 bytes where 11 bytes of download is required, not more. Besides, there won’t be any XML parsing or validation in the picture, making the application that much faster.

However, somehow, I don’t see data flowing over XmlHTTPRequest in the form of plain strings of data either. Instead, I think it’ll flow in the form of JavaScript code.

Let me use an example from the guy who deconstructed Google Suggest, and the way XmlHTTPRequest works there. According to him, the code returned by Google when you hit a keystroke looks like this.

sendRPCDone(frameElement, "fast bug",
    new Array("fast bug track", "fast bugs", "fast bug", "fast bugtrack"),
    new Array("793,000 results", "2,040,000 results", "6,000,000 results", "7,910 results"),
    new Array(""));

Basically, what this code does is call a function, with a bunch of parameters. The function called, sendRPCDone, is defined in client side JavaScript code. The parameters are generated depending on server side processing (probably from Google’s cached searches). Then, later in the code, this response text is run through an eval function to execute the function on the client side (eval(_xmlHttp.responseText);).

I think the eval function is really a boon here. It makes JavaScript the language of choice over XML for XmlHTTPRequest. For those who don’t know what eval does, it basically takes a string, treats it as though it were JavaScript code, and executes it. The string could be dynamically generated by client-side code, or, as in this case, returned from the server. Just what we need here!
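Here’s a stripped-down sketch of that pattern (the names and data are simplified stand-ins, not Google’s actual code): the server sends JavaScript source, and the client just evals it.

```javascript
var lastResult = null;

// Defined on the client; the server-generated code will call it.
function sendRPCDone(query, suggestions, counts) {
  lastResult = { query: query, suggestions: suggestions, counts: counts };
}

// Pretend this string just arrived as xmlHttp.responseText:
var responseText =
  'sendRPCDone("fast bug", ' +
  'new Array("fast bug track", "fast bugs"), ' +
  'new Array("793,000 results", "2,040,000 results"))';

eval(responseText); // the server's code runs on the client
```

No parsing, no validation, no extraction step: the data arrives already wired to the function that wants it.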

To summarize, here’s why we’ll use JavaScript instead of XML when using XmlHTTPRequest.
  • Downloading XML data will take more time as compared to JavaScript. Depending on the amount of data to be transferred, this difference might be huge.
  • Use of XML will necessitate validation of the markup, so that a parser can read the document – a process that requires extra processing on the client-side. This is not necessary with JavaScript.
  • The XML data is not already in a position to call scripts. Client-side code will have to handle that, based on event handlers or such. JavaScript, on the other hand, arrives ready to be executed on the client.
It's easy to see why XML will not be the format of choice over XmlHTTPRequest. A little thought later, JavaScript emerges as the winner for data exchange over XmlHTTPRequest.

Did someone say that JavaScript will be coming out of the closet this year?

Tuesday, January 11, 2005

Google Logos

You are probably aware of the fact that occasionally Google changes their logos to celebrate a festive occasion. These logos (called the Google Doodles) are really nice to look at. Go have a look.

What I didn't know (and I happened to notice it from somewhere) is that Google has different search sites for specific topics. Here's the list of some of them, with their logos:

I couldn't help noticing the Google Microsoft Search logo. It is the only one with a non-white background. Somehow, it seems to be mocking MS with its sky and grassy hills combination.

Sunday, January 09, 2005

Reflections

Was spending a night out on the streets of Mumbai today with a friend's camera. I am totally not used to this camera but I managed to get an interesting shot or two. The cam is a Nikon Coolpix 3100 - the same series as mine, but only slightly less loaded.

This pic shows the reflection of the lights of Hotel Taj in the waters of the Arabian Sea, at the Gateway of India.

Thursday, January 06, 2005

Web Applications - The Wave Of The Future

Web applications are finally beginning to see the light of day. I am putting all my money on this. I am sure this is the future of the web, application design, networked software architectures, maybe even desktops and operating systems themselves.

Web based applications will make inroads into (almost) every kind of application we use today. In this post, I try to list out some of the benefits of using web based apps.

First, a quick introduction to web based apps, for those who just tuned in. Examples of web based applications are GMail, Bloglines, and OddPost. GMail is a full featured mail client that does everything that a desktop mail client does, only ups it a few notches. So does OddPost. Bloglines is a web based feed reader that is directly competing with desktop feed readers, and winning. These web apps are basically applications that run on a server, whose UI is exposed in the form of web pages. By the very nature of its architecture, the program logic lies in a central place (the server), and the UI is available to the client in a piece of software that has been around since the birth of HTTP itself (the browser). The use of JavaScript and the DOM is probably the single most important factor that has made these apps possible.

The use of JavaScript is very different from the way it was used in 1999. Back then, JavaScript was just a cool way to make mouseovers and cool (now irritating) dynamic elements on the page like mouse pointer trails and flashy text that change colors. JavaScript was used because it could be, not because it should be. At one point, I thought JavaScript was dead, indeed like many other developers must have thought. The use of JavaScript became an example of bad web design, and the language was pushed into the dark annals of the world of notorious web pages.

Now, though, JavaScript has matured. It is used for just what it was designed for - creating dynamic changes on the client side without introducing a server refresh lag. A dynamic front end in a browser window has always been a problem for application design and is probably the only reason why web based applications have not yet seen the light of day. Today’s use of JavaScript circumvents that problem.

Now that we can make web applications, here's why we should, and will, do it.

The application lies at only one place

The application logic lies on the server, unlike desktop applications, where the application logic lies on the user’s computer. Since there is only one working copy of the application, it’s much easier to distribute. In fact, distributing applications in the traditional way doesn’t make sense here at all, since the user doesn’t really get a copy of the app at all. All the user gets is the UI, which is really all he needs. Application distribution is a non-existent problem, which means it is as easy as it can get.

The user doesn’t need any software

All the user does is fire up his browser and type in a URL. These days, browsers are a standard piece of software that usually gets bundled with the OS itself, so finding a browser on the client is not a problem at all, and really, that’s all the user needs.

The user isn’t the administrator

Typically, when a user has the need for an application, he has to prepare himself to be the administrator of the application. He will have to do the setup, running, maintenance and troubleshooting, and handle all the problems that come with it. But in the case of web apps, since the app is really not with the user at all, the user won’t have to worry about these problems. And really the user shouldn’t. That makes for a happy user. That translates into a good application. This, of course, is impossible with desktop applications.

The administrator is the application’s programmer

Yes, the load on the programmers is more. But if you compare that to the costs of making deployable applications and then managing a service team to help users install, maintain and troubleshoot the application, it’s easy to see which is cheaper, not to mention more efficient. A small team of programmers working at one central location to maintain, manage and troubleshoot applications makes more economic sense to app manufacturers.

The application makes no assumptions about the user

Ok, that is a small lie. The user is assumed to have a capable browser to handle the application. But really, is that such a big deal? Because of the very nature of HTTP, the application is platform independent. This means that the user can use any operating system he wishes to use, and that doesn’t make a difference. This is a huge change from the traditional “Designed for <operating system name here>” applications. The user doesn’t have 512 MB RAM? No problem. The user doesn’t have processing power? No problem. The user has an outdated motherboard? No problem.

Multiple versions are a thing of the past

When a new version of the application is out, every single user gets to use it instantly. Literally. How cool is that! Again, this is possible since the application is centralized, and there’s really only one running copy of the application. All version changes are instantaneous and the user doesn’t need to know how to upgrade. This also means that the app developers don’t have to worry about supporting legacy versions and backward compatibility is not an issue.

It’s lightweight

The user doesn’t really download the whole application before using it. (Take that, Java fans!) He doesn’t even download the whole UI. He only downloads the part of the UI that he immediately needs for the URI he’s on. This makes the application very lightweight, and hence responsive and fast. Thanks to this, even very complex applications can load in seconds, maybe less, even over a bad connection.

It’s portable

There’s nothing installed at the users end, so the user can use the app from any location. Any location means literally any location across the globe. You can use the application in your office, and then at home, and then from your vacation in Hawaii, and every time it works just as well, without any problems of portability.

It’s simple and trustworthy

No messy extra protocols or port numbers. If your firewall lets you see web pages (and which firewall blocks that!), you can access the application. So the user doesn’t have to know a thing, or worry about it. He just fires up his browser, and enters a URL. Simple. Even better, since the application works in the protected environment of the browser, the application can be trusted and can never mess with the user’s system. No more can an application “slow down your system”, or “crash every once in a while”. Those complaints are a thing of the past.

The app architecture is transparent to the client

So, the programmers think that it would be a good idea to have separate data servers? Maybe split up demanding processes and push them to different computers to handle the job? Maybe they need a whole bank of a hundred thousand computers working together to do the job? This kind of architecture can be designed for optimum load balancing between servers. However this is entirely impossible with desktop applications. With centralized applications, it’s not difficult to construct and maintain such systems. In any case, whatever be the server architecture, the user doesn’t need to know or care.

Sure, web applications can never do some things that desktop applications can do – far from it. You will never be able to do a defrag through a browser. You can never run 3D modeling software through a browser. Browser based apps cannot replace system programs at all. Not for a long time to come, at least. However, for application programs, the desktop’s days are numbered.

Welcome to the show. Today’s act: The beginning of the end of the desktop.


Update: This post is now available in Russian. (Thanks, Alexander Kachanov)

Wednesday, January 05, 2005

Bad Forms

A site I frequent has a nice little local weather update shown on its sidebar. Now, I get my local weather forecasts delivered on my desktop through a Konfabulator widget, The Weather, and have been reasonably happy with it. Also, considering that in the web design circles, not too many bloggers are from Mumbai, I thought it'd be a nice touch to add to my site.

After some digging, I discovered that my weather feeds come from weather.com. Over at their site, they have tools to display the current local weather on my site, and I was very pleased with them at first glance, until I discovered that I have to register to use them.

I have to say, I have issues with registering for services, especially free ones. I can cite various reasons why I do not and probably should not register. Most web designers must know that users do not like to register on a site. If they don't, they don't make for good web designers.

Aaagh! Let me get to my point.

Have a look at this form I had to fill to register. Done? Now, let me list my peeves out.

  • The form says that fields marked with the red asterisk are required fields. Is it just me, or do you find it funny too that every field has this red asterisk next to it?
  • Really, do they need my first name, last name, date of birth (what?), and gender (whaaaaat???) so that I can sign up for a weather service?
  • The address (and its various fields) are also marked with that asterisk. However, the state I belong to here in India is Maharashtra, which isn't listed there. In fact, none of the Indian states are listed there at all! Wait, it's only American states listed there? Maybe the developers forgot that The Internet is a worldwide network, and America is not the only place in the world? Wonder why they hired developers like that! Required field, you say? What do I select now?
  • Oh, hang on. They have an option where I can change the country from USA to India. However, I found that only after I had found the 'State' dropdown box. Smart layout, I must say!
  • Oh, I am wrong again. Changing the country is of no consequence to the list of states. Even if I select "India", the states shown are all American! Smart!
  • Here in India, our pin codes are 6 digits long. However, I can only enter 5 digits in that box they have on their page. Let me guess, America has 5 digit pins?
  • Notice that I have not talked about concepts like accessibility yet. Obviously (or at least, the impression I get without doing my right-click > View Source is that) the form is not the least bit accessible to other kinds of browsers.
I must say, it is really irritating when companies like this just assume that their audience is only American. The Internet is not just about America, you know. Don't make me fill forms which require my local information, without doing a study of what local information looks like outside your locality. In any case, when designing a form, please don't assume that America is the world. And please do not ask for stupid irrelevant information, like my gender. By the way, would you also mind giving a little thought to this little thing called 'Usability', while you are at it?

Sorry weather.com. I do not like you any more. I know you'll probably say "So what?", but that doesn't help improve your image. Quite the contrary, in fact.

May the web design community use that form as an example of badly designed forms. When you see forms like this, you can put one hand on your stomach, point the index finger of your other hand at the screen, tilt your head backwards slightly, and laugh out loud with your stomach bobbing rhythmically up and down.

Tuesday, January 04, 2005

Link Bin

Sunday, January 02, 2005

Tsunami Links

Here are some links related to the Indian Ocean Tsunami.

Predictions for 2005

I am making this post for two reasons.

  1. Everyone is making similar posts.
  2. It’ll be fun to come back to this list at the end of the year to see how correct I was.
So, here are my predictions about the web for the coming year.

People will understand why IE sucks

We know why IE sucks, but the regular user doesn’t even know of his options. Alternative browsers that provide a better surfing environment will make users turn to them. Firefox will lead this bunch, probably breaking the 50% browser share barrier.

Developers will still not take to XHTML

Developers will realize that XHTML validation is a lot of work and not really worth the effort. Web pages will continue to use HTML. We will use the lessons we’ve learnt from XHTML, like semantics, separation of code and content, CSS based design and accessibility driven design, but the web pages will continue to be invalid XHTML, and people will discover that the validation doesn’t really matter at all. XHTML will be used only by purists. It will remain a great idea that we can’t use yet.

JavaScript will take the stage

2005 will probably be remembered most for the use of JavaScript and related innovations. Yes, that happened in 1999 too, but it will be different this time around. Developers will understand how to use JavaScript in a fashion that keeps the page accessible to even incapable user agents. XmlHTTPRequest will probably be the hero of the show through this year.

More Web Applications

There’ll be more applications running off a browser over the Internet. I can already feel this happening with me – I probably use fewer desktop applications than web applications. More and more system independent applications will run off the browser.

Browser independence will be in vogue

Sites that work only in one browser will be looked down upon. Most such sites will alter their code to accommodate multiple browsers or otherwise be totally browser independent.

Mobile browsing will be more important than ever

In the long term future, I don’t see people using a desktop computer just to check mail or send an IM. Instead, such things can be done faster and more efficiently on a mobile device, like a phone. Good browsing capabilities for such phones will be very important, and we’ll see crucial steps being taken in that direction this year.

Most of these are rather obvious. But then, the obvious doesn’t always come to pass. Let’s see what happens at the end of the year.

Happy new year to one and all. Hope the year is great for you.
