Monday, December 29, 2008

Function declaration vs. function assignment in JavaScript

I'm surprised how much I learn about JavaScript every single day.

It all started with a post whose title wrongly suggests a bug in Internet Explorer. The comments are an interesting read: people first suggest that this "bug" exists in all browsers except Firefox, then discover that it's actually Firefox that behaves differently, not the other way round. A bug report is then filed against Firefox. Brendan Eich comes along and resolves it as an invalid bug, with a brief explanation.

To explain, here's what's happening:

Ned Batchelder (the poster of that blog post) came across this piece of code that worked differently in IE and Firefox.

function really() { alert("Original"); }
if (0) {
    alert("No");
    function really() { alert("Yes, really"); }
}
really();

The explanation is pretty simple. A function really is defined, which alerts "Original". Inside an if block that should never execute, really is redefined to alert "Yes, really". Since that block should never run, you'd assume the function never gets redefined. This is how Firefox behaves. Not so with IE, which does redefine the function. Weirdly, the alert("No"); is never executed, even though the function gets redefined.

Turns out, IE is not the only one doing this. All WebKit browsers and Opera exhibit this behavior too. Kris Kowal was the first to suggest, for the right reasons, that this is a bug in Firefox. So a bug report was created, and BE closed it. Here's his explanation:

ECMA-262 Edition 3 does not specify functions in sub-statements, only at top level in another function or a program (see the normative grammar). Therefore this bug is invalid as summarized.

Moreover, we have extended ES3 (as allowed by its chapter 16) for ~ten years to support "function statements" whose bindings depend on control flow. So this again is invalid. There's a bug to dup against, if someone finds it feel free to change resolution accordingly.

/be

For those who didn't understand, here's a simpler version of what BE said.

  • function really() { ... } is a function declaration.
  • Function declarations can only appear at the top level of a program or directly inside another function. They cannot appear inside an if block.
  • If a browser finds a function declaration inside an if block anyway, it isn't expected to throw an error. Instead, it should handle it gracefully.
  • There's no spec that defines how such situations should be handled, so browsers are free to do what they want to.
  • All browsers except Firefox scan the code at load time to find function declarations. In doing so, they see the original function first and then its redefinition, so the original is discarded and the new function wins.
  • Firefox instead handles such declarations at runtime, evaluating them only when control flow reaches them. Since the if block never runs, Firefox never redefines the function.

So, BE's explanation that this is not a bug is valid. This is a gray area in the spec, and the browser is free to implement it the way it wants.
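
As a quick illustration of that load-time scan, here's a tiny snippet (my own example, not from the original discussion) you can paste into any browser console:

alert(typeof really); // alerts "function": the declaration below was processed before any code ran
function really() { alert("Yes, really"); }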

Also, and more interestingly, there's a construct that doesn't suffer from this ambiguity: function assignment, where a function expression is assigned to a variable. (This is distinct from the "function statements" BE mentions, which are a Mozilla extension.) A function assignment looks like this:

var really = function() { ... };

Now, the spec is very clear that such an assignment is evaluated at runtime, as part of normal control flow - so an assignment sitting inside that if (0) block simply never runs. Ned wouldn't have run into this problem if he had been using function assignments.
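
To make that concrete, here's a sketch (mine, not from the original post) of Ned's snippet rewritten with assignments; since the reassignment sits inside if (0), it never executes, and every browser alerts "Original":

var really = function() { alert("Original"); };
if (0) {
    alert("No");
    really = function() { alert("Yes, really"); };
}
really(); // alerts "Original" everywhere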

I learn new things about JavaScript every day!

Friday, November 21, 2008

HTTP cache optimization

You know what else Twitter is good for? Quotations. As my good friend and colleague Piyush Ranjan said:

Recession is [a] good time for innovation. Best innovation happens in crunch situations. Otherwise people just throw money at the problem!

At Cleartrip, as with any other company, we're always looking for ways to reduce our costs. Now with the travel industry in India hit hard, and the general economic slowdown, it's very interesting for us techies to innovate in ways that will help us cut costs.

One of the recent things we've been working on, and the one I want to talk about here, is bandwidth savings. The benefits of bandwidth savings are usually twofold. First, and obviously, it reduces our bills. Second, it usually manifests as a better experience for the user, since he doesn't have to wait as long for stuff to transfer from our servers to his browser every time he moves between pages.

Now, ours is a fairly dynamic site. Almost every page (well, the significant ones at least) is so dynamic that it can't be cached for even a couple of minutes. So caching HTML doesn't suit us. What we can cache is everything else - images, CSS and JavaScript. Also, since we try to adhere to web standards as much as possible while being as Web 2.0 as possible (whatever that means), our use of CSS and JavaScript is pretty heavy.

Phase 1

So, a couple of months ago, we made our first set of optimizations. We noticed that our images change very rarely - some of them hadn't changed since we first made them. There was no point sending down copies of the images to our users frequently. These could be cached pretty aggressively. We decided on an arbitrary time of 1 month for the caching of images.

Next came the JavaScript and the CSS. This posed a considerable problem. We make JS and CSS changes very frequently - as often as a couple of times a day - so we couldn't afford to have them aggressively cached at the client end. We needed the agility to get new versions of our code into our users' caches. Again, somewhat arbitrarily, we decided that the cache time for JS and CSS files would be 2 hours. That way, they wouldn't be cached too aggressively, while still staying in a user's cache for roughly the length of a usage session. At the same time, it would take at most 2 hours for our changes to propagate to all users.

So, with this configuration change rolled out (it's not at all hard to do if you understand caches well), we sat back and monitored our savings. Turns out, the savings were pretty amazing. Hrush, our founder, even blogged about how surprised our data center providers were - to the extent that they said it was "just not possible" to have such low bandwidth consumption for the traffic we get!

The problems

It's ironic that our data center guys said that, because just around that time we were looking at how to optimize bandwidth usage even more. And why did we decide to optimize further?

  • Because you can
  • Some of our files never changed at all - core JS libraries, for example. Why bother downloading them again after 2 hours?
  • Because this setup was hard to work with. Remember I said we patch stuff to production very frequently? The two-hour propagation delay meant that if a patch was a bug fix, things could stay broken for up to two hours even after we'd put up the fix.
  • Because it gets very hard to make changes that have dependencies on other resources. For example, if the JS needs certain CSS changes, and certain markup changes, in what order do you put the changes up on the server?

This usually meant that we had to plan in advance and we would still have some backwards-compatibility cruft in our JS to manage this 2 hour transition. Sometimes it would take an entire day to make even a minor patch since it had to be batched with breaks of two hours. One quick-fix would be to rename the file and all references to it, but it's easy to see how painful that is to do on a file-by-file basis, not to mention that we'd run out of file names pretty soon!

Phase 2

One thing was for sure, though: if we could somehow manage the file name issue, we could solve all the problems I listed. Every time we made a patch under a different file name, it would instantly be available to all users, irrespective of their cache status. This opened up another interesting possibility: we no longer had to restrict ourselves to the 2-hour window. We could increase our cache expiry time to 20 years for all it mattered. That should give us MUCH more savings than we were already getting, and our users would download as little as necessary.

Except, we didn't yet know how to solve the file name problem gracefully. Ideally, there shouldn't be a human being deciding which files have changed, what the new file names should be, and going all over the place changing them. That would be tedious and error-prone, not to mention boring. Then, of course, you have to think about how these file name modifications would work with source control. You don't want to mess up your clean source code management history.

The solution to the source control mess was rather easy: we fake the file name. We take a file name and append a number to it. This makes the client believe that the file name has changed, so it asks the server for the file afresh. Meanwhile, at the server, a rewrite rule transforms the requested name back to a real file on our server. Sounds simple enough. We tried it, and it worked like a charm.
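
To illustrate the idea, here's a hypothetical sketch of the page-side half of the trick - the file names, the version token and the helper are all made up, and a matching server-side rewrite rule is assumed to strip the token back out:

// Any token that changes whenever the files change will do.
var ASSET_VERSION = "r4812";

// Turns "/js/core.js" into "/js/core.r4812.js". The server's rewrite rule
// maps the versioned name back to the real file on disk.
function versionedUrl(path) {
    return path.replace(/\.(js|css)$/, "." + ASSET_VERSION + ".$1");
}

document.write('<script src="' + versionedUrl("/js/core.js") + '"><\/script>');
document.write('<link rel="stylesheet" href="' + versionedUrl("/css/site.css") + '">');

Whether the versioned URLs are written out by the server or by a snippet like this is an implementation detail; the point is that the URL changes whenever the version token does, while the real file names on disk stay untouched.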

Now, to crack the real problem: how do we generate these numbers sensibly? The number had to be such that it would never repeat (at least for any given file), it had to be global - two pages shouldn't use different numbers for the same resource - and when it changed, the change had to reflect site-wide instantly. Now that we knew how the number should behave, the quest was on to come up with a mechanism to generate it.

The solution

After a lot of thinking, we had a shameful duh moment, and it suddenly all made sense. We didn't need to invent these numbers at all. We just needed to use our source-control revision numbers! The revision number matches all the characteristics of the number we want. Why bother with a complex system to generate and track numbers when one was already available, even if well disguised?

I'll save you the implementation details about how we made this available site wide, and how we made it possible to have instant global changes to this number. That definitely wasn't the tough part, and I'm sure you'll figure out the details. Who knows, maybe Piyush might just release a plugin for Rails to do it automatically for you. However what surprises me is that it's very hard to find such gems of knowledge on the net. I'm now beginning to think that maybe this design pattern should be used for distribution of all static resources on the web. We're definitely not the first to invent this pattern - why is no one else talking about it?

We've only just rolled this out on cleartrip.com and have yet to gather a decent sample of data to see how it has impacted our bandwidth consumption. But any fool could guess that our bills should reduce significantly with this change.

Wednesday, November 12, 2008

Joel on Project Managers

This quote is long, in typical Joel On Software style. He said this in his latest podcast over at Stack Overflow with Jeff Atwood. I've ripped it off the transcript wiki. Definitely worth a read.

Here's my feeling about project managers:

One of the things that is interesting is that project managers, traditionally, are brought on because you have a team of yahoos - and this is just as true in construction, or in building an oil rig, or in any kind of project as it is in the making of anything - making a new car at general motors, or designing the new Boeing 787 dream liner - as it is in the software industry. Project managers are brought in because management says: "Hey, you yahoos! You're just working and working and working and never get the thing done and nobody knows how long it's going to take." If you don't know how long something's going to take and you can't control that a little bit then this really sucks from a business perspective. I mean; if you think of a typical business project - you invest some money and then you make some money back. The money you make back - the return on investment - might be double the amount of money you invest and then it's a good investment. But if the investment doubles because it took you twice as long to do this thing as you thought it would then you've lost all your profit on the thing. So this is bad for businesses to make decisions in the face of poor information about how long the project is going to take and so keeping a project on track and on schedule is really important.

It's so important that they started hiring people to do this and they said: "OK, you're the project manager - make sure that we're on track." These project managers were just bright college kids with spreadsheets and Microsoft Project and clipboards. They pretty much had to go around with no authority whatsoever and walk around the project and talk to the people and find out where things were up to, and they spent all their time creating and maintaining these gigantic Gantt charts - which everybody else ignored. So the Gantt charts, and the Microsoft Project files, and all those project schedules, and all that kind of stuff, were artifacts created by a kind of low-level person. They might be accurate, depending on how good that low-level person was, but they were still an output-only thing from the current project: Where are we up to? What have we done? How much time have we spent? What's left? Who is working on what?

Then, for some reason, these relatively low-level people, who were not actually domain experts (if they were at Boeing they don't know anything about designing planes, if they were on the software team they're not programmers - they're project managers, and they don't know anything about writing code), started getting blamed when things went wrong, and they started clamoring for more responsibility, more authority to actually make changes and to actually influence things and say: "Hey, Joe's taking too long here - we should get Mary to do this task, she's not busy." The truth is that they started getting frustrated because they were low-level, secretarial-like members of their teams and they wanted to move their profession up the scale, so they created the Project Management Institute - or whatever it's called - and they created this thing called... ah, I don't even know! But they created a whole professional way to learn to be a professional project manager and they decided to try to make it something a little bit fancier than just the kid with the clipboard that has to maintain these Gantt charts all day long. You can tell this is what happened because the first thing project managers will tell you about their profession is that the most important thing is that they have the authority to actually change things and that they are the ones that actually have all the skills that can get a project back on track, or keep a project on track, and therefore they need to have the authority to exercise these skills, otherwise they'll never get anything done, they'll never be able to keep the project on track - they don't just want to be stenographers writing things down.

The trouble is, they don't actually have the domain skills - that's why they are project managers. If you are working on a software project, you know how to bring it in on time: you've got to cut features, and you know which features to cut, because you understand software intrinsically and you know what things are slow and what things are fast and where you might be able to combine two features into one feature, where you might be able to take a shortcut. That's the stuff a good developer knows; that's not the stuff a project manager knows. In a construction project it's the architects and the head contractors who know where shortcuts can be taken and how to bring a project in on time, not the project manager. The project managers don't have any of the right skills to affect the project, and so they inevitably get really frustrated and everybody treats them like secretaries, or treats them like 'annoying boy with clipboard', when they really don't have a leadership role in the project - and they're not going to be able to, because they don't have the domain expertise. No matter how much they learn about project management, no matter how many books they read, or how many certificates they get, no matter how long they've been doing project management: if they don't know about software, and software development, if they don't have that experience, they are always going to be second-class citizens and they're never going to be able to fix a broken project.

Wednesday, October 08, 2008

Understanding eval scope. Spoiler: It's unreliable!

Today, I ran some tests to help me understand the scope in which an eval runs. Turns out, like so many things in the browser world, it's very unpredictable and exhibits different behavior in different browsers.

Let's start with the following snippet of code. I've added comments to demarcate areas in the code, which I will be changing with each iteration.


var foo = 123;
var bar = {
    changeFoo: function() {
        // We'll keep changing the following snippet
        alert(this);
        eval("var foo = 456");
        // Changing snippet ends
    }
};

bar.changeFoo();
alert(foo);

A little explanation of the code above. foo is a variable in the global scope, and its value is set to 123. An object bar is created with a single method changeFoo, which does an eval. The eval creates a local variable foo (thanks to the var) and sets its value to 456. bar.changeFoo is called, and then the value of the global foo is alerted.

The aim is to test the scope in which eval runs. If the eval runs in the global scope, the global variable foo should change its value. If it runs in the local scope, the global foo should be unaffected. There are also various things we can do inside the changeFoo method that should alter what this refers to, so we alert this as well to see what happens.

The findings are listed below:

For each case below, "foo" is the value of the global foo after bar.changeFoo() runs, and "this" is what alert(this) showed.

Case 1:
alert(this);
eval("var foo=456");
IE: foo=123, this=object | Safari 3.x: foo=123, this=object | Firefox: foo=123, this=object | Chrome: foo=123, this=object | Safari Nightlies: foo=123, this=object

Case 2:
alert(this);
window.eval("var foo=456");
IE: foo=123, this=object | Safari 3.x: foo=123, this=object | Firefox: foo=456, this=object | Chrome: foo=123, this=object | Safari Nightlies: foo=456, this=object

Case 3:
alert(this);
this.eval("var foo=456");
IE: error, this=object | Safari 3.x: error, this=object | Firefox: error, this=object | Chrome: error, this=object | Safari Nightlies: error, this=object

Case 4:
alert(this);
eval("var foo=456", window);
IE: foo=123, this=object | Safari 3.x: foo=123, this=object | Firefox: foo=456, this=object | Chrome: foo=123, this=object | Safari Nightlies: foo=123, this=object

Case 5:
(function() {
    alert(this);
    eval("var foo=456");
})();
IE: foo=123, this=object | Safari 3.x: foo=123, this=window | Firefox: foo=123, this=window | Chrome: foo=123, this=object | Safari Nightlies: foo=123, this=window

Case 6:
(function() {
    alert(this);
    window.eval("var foo=456");
})();
IE: foo=123, this=object | Safari 3.x: foo=123, this=window | Firefox: foo=456, this=window | Chrome: foo=123, this=object | Safari Nightlies: foo=456, this=window

Case 7:
with(window) {
    alert(this);
    eval("var foo=456");
}
IE: foo=456, this=object | Safari 3.x: foo=456, this=object | Firefox: foo=456, this=object | Chrome: foo=456, this=object | Safari Nightlies: foo=456, this=object

Case 8:
with(window) {
    alert(this);
    window.eval("var foo=456");
}
IE: foo=456, this=object | Safari 3.x: foo=456, this=object | Firefox: foo=456, this=object | Chrome: foo=456, this=object | Safari Nightlies: foo=456, this=object

What I think of these results:

  • I don't know what Firefox is doing in case 2, and for some reason the Safari Nightlies seem to be following it. Maybe it's just beyond my understanding, but case 2 is not supposed to be any different from case 1. Why does case 2 operate in the global scope? And if window.eval were really different from plain eval, case 3 shouldn't have given errors across the board. Someone please help me understand that $hit.
  • Case 4 makes sense, but the second argument to eval is non-standard Firefox behavior. Understandably, no one else exhibits it.
  • IE baffles me in case 5, and Chrome seems to ape it. In this scenario, the anonymous function is called without any object context, so this should point to the window. WTF is happening here!
  • Consistent with case 2 above, Firefox and the Safari Nightlies continue to display weird behavior in case 6. For some reason, window.eval operates in the global scope in these two browsers.
  • Now, I have no idea why, but only cases 7 and 8 - the with(window) ones - seem to really work at all. This is despite Doug Crockford going on and on about not using with constructs. It's also beyond (my) understanding why the with should make any difference to the eval, since eval is a property of the window object anyway.

All in all, if you are going to be evaling JavaScript (not JSON) and you want the eval'd code to run in the global scope, you should wrap the eval in a with(window) block. Otherwise, you can lose a lot of hair handling cross-browser issues.
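
For reference, here's a minimal, self-contained version of that workaround, mirroring case 7 above (note that it relies on the global foo already existing as a property of window):

var foo = 123;
var bar = {
    changeFoo: function() {
        // Wrapping the eval in with(window) makes the assignment reach the global foo
        with (window) {
            eval("var foo = 456");
        }
    }
};
bar.changeFoo();
alert(foo); // alerts 456 in all the browsers tested above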

Hope you don't lose as much hair as me.

Wednesday, September 17, 2008

Housekeeping

I've made a couple of minor changes to this blog's UI. Nothing dramatic, but I thought I should draw your attention to them, especially if you are reading only from my Atom/RSS feed.

The first change came from realizing that my posts are longer than they should be. Somehow, I can't seem to compress them beyond what I already do. Long pages are a pain in the wrong spots to read, so I decided to expand Douglas Bowman's original layout into a fluid-width page, so that my posts consume less vertical space. (I just noticed that Steve Yegge has done similar fixes - I'm flattered.)

Secondly, and sort of to compensate for my lowered rate of posting, I've included two new feeds you can subscribe to on this blog. You can find the latest updates to these feeds in the right sidebar. The first is my feed from del.icio.us, which happens to be my bookmarking service of preference. The second is my list of shared items from my Google Reader. Both of these feeds get updated more frequently than my blog itself, so you might find them interesting.

This way, not only do you keep in touch with what I'm writing, you can also keep in touch with the stuff I'm reading. Expect to find some tech humor in these feeds too ;). Here's the rule of thumb: the two new feeds reflect what I'm reading and thinking about, but not necessarily things I have an opinion on. My blog is my list of opinions. That's the difference.

I've been testing these thingies on my blog for some time now, so if you've been around here recently, you might have already noticed these changes. I just thought I'd wait a bit before announcing them - turns out Google is doing a reasonable job of keeping these things working, after all. ;)

Wednesday, September 03, 2008

JavaScript for Linux Hackers

Two weekends ago (24 August), I gave a talk about JavaScript at the Indian GNU/Linux Users Group of Mumbai (ILUG-BOM) - a small gathering of Linux hackers from around the city. The talk was held at the Homi Bhabha Centre for Science Education (HBCSE), TIFR, Mumbai.

It was very weird having to explain JavaScript to kernel hackers and sysadmins. It entails a different approach - one where you have to assume that the audience knows a lot already, probably more than you in some respects. They are not the kind to be wowed by browser effects and visual fanciness. Also, I know very little about Linux systems, so we had very little in common. It's very challenging to prepare for such an audience.

I spoke about the language, its history, its expressiveness, the type system, variable casting, objects, marshaling objects, its lambda nature, and several language constructs, especially functions. What I didn't cover were things like the DOM, inheritance patterns and constructor functions, but there has to be something for next time, right? ;)

I think it went pretty well. Well enough for them to invite me for another session where we could cover the left-out topics. I'm sorry - I would have put my slides on slideshare or something, but honestly, when I was running through my slideshow the day before the presentation, I nearly dozed off. So I decided that I'd just do the presentation without the slides - just me talking, and a JavaScript console on the screen.

Friday, August 29, 2008

element.hasFocus? document.activeElement!

Today at work, in some code I was writing, I wanted to know whether a given input box has focus. Turns out, this is surprisingly difficult. The input element doesn't have any hasFocus or similar property! No wonder working with the DOM keeps tripping people up!

Turns out, for quite some time now, Internet Exploder has been supporting a proprietary property - document.activeElement - which tells you which element is currently focused. Exactly what I needed. Except this is proprietary - I tried it in Firefox 2, and it didn't work as expected. Fortunately, it turns out the HTML 5 spec has now incorporated this property of the document object, and all future browsers should support it. Firefox 3 already supports document.activeElement. I've read that Opera supports it, but haven't tried. Safari does not, but the latest nightlies support it as well. Of course IE6 and IE7 support it perfectly well - IE invented it, after all. So, of the big browsers, only Firefox 2 and Safari are problematic.

Since my code was not super critical, I've decided to skip support for just this bit for Firefox 2 and Safari. In any case, I'm hoping (against hope?) that both these browsers have a faster upgrade cycle than others, so they'll be outdated pretty soon.
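
A minimal sketch of what that looks like in practice - the searchBox id is made up for illustration - checking for document.activeElement and simply bailing out where it doesn't exist:

function boxHasFocus() {
    var box = document.getElementById("searchBox"); // hypothetical input box
    if (!document.activeElement) {
        // Firefox 2 / Safari 3: no way to tell, so skip the behavior entirely
        return false;
    }
    return document.activeElement === box;
}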

Just in case you were thinking that the focused element can easily be 'discovered' by using the onblur and onfocus events, think again. Firstly, according to the specs, the focus and blur events don't bubble, so you can't use event delegation to capture all focus/blur events on the document. Even if you could use event delegation, putting these event handlers in an external script would mean that the script kicks in after a little delay - either after the script has loaded, if the script tag is placed at the bottom, or after the page load, if you are waiting for the DOM to be ready - which in my case wasn't acceptable. The only other solution is to have a script include or an inline script tag that declares a function before the DOM is loaded, and then have inline onblur and onfocus event handlers, DOM Level 0 style. That's just plain ugly. None of these solutions is both workable and graceful.

Wednesday, August 13, 2008

Brendan Eich on code translation to JavaScript

I've always hated the idea of some server-side framework (like ASP.NET, GWT, RoR, what-have-you) generating JS code to run on the client. I'm glad at least Eich agrees with me, as published in this interview. (Jump to page 3 for this excerpt.)

I did not intend JS to be a "target" language for compilers such as Google Web Toolkit (GWT) or (before GWT) HaXe and similar such code generators, which take a different source language and produce JS as the "object" or "target" executable language.

The code generator approach uses JS as a "safe" mid-level intermediate language between a high-level source language written on the server side, and the optimized C or C++ code in the browser that implements JS. This stresses different performance paths in the JS engine code, and potentially causes people to push for features in the Ecma standard that are not appropriate for most human coders.

JS code generation by compilers and runtimes that use a different source language does seem to be "working", in the sense that JS performance is good enough and getting better, and everyone wants to maximize "reach" by targeting JS in the browser. But most JS is hand-coded, and I expect it will remain so for a long time.

That said, I think Eich doesn't highlight some of the other problems with server-generated JavaScript: the difficulty of debugging generated code, bloated output, and potentially inefficient code. I've worked with server-generated JavaScript in ASP.NET and RoR, and I know what a pain it can be.

Friday, June 20, 2008

IE Congratulates Firefox

... on shipping Firefox 3.0.


The comments from around the web are hilarious:
  • Don't eat it.
  • Is it not poisoned?
  • That cake will give you CSS rendering errors, in your colon.
  • Moments later the cake hit a knife/plate standard it wasn't compliant with.
  • It must have been hard for them to put "love" on the cake
  • As you eat away the icing, you'll see that the cake is blue... little by little, it's becoming clear... it's the blue screen of death!
  • Congratulations on shitting the IE7 Team
  • ...did it come with an End User License Agreement...?
  • There should be some bugs inside the cake.
  • ...it's probably half baked too!
  • I guess that box was rendered in quirks mode.
  • Sheesh, even their cake has box model bugs.
  • I was wondering what the IE team had been doing for the past 3 years...
  • There's actually an IE team?
  • [Apple] snuck their cake through during the last Quicktime update.
  • And if you didn't like the candles, you had to replace the whole cake.
  • ...that the cake tasted pretty good, but as they started to dig in, they sadly realized that instead of a whole cake, they had actually gotten a thin layer of cake on top of a cake-shaped support structure made of toothpicks and glue.
  • Turns out they forgot to add sugar to the cake. That will be added on Patch Tuesday.
  • Good luck getting the recipe for that cake.

Tuesday, June 10, 2008

Simply use HTML. Not XHTML.

The good ol' argument about HTML vs. XHTML seems to have resurfaced on the Internets. I was firmly in the XHTML camp at one time, but I mentioned in a previous post why I think there's no point in fighting this battle anymore. There is a clear winner - HTML.

Of all the recent posts out there arguing one way or the other, this little thought experiment best echoes my sentiments about XHTML vs. HTML.

You pore through the raw source code of the page and find what you think is the problem, but it’s not in your content. In fact, it’s in an auto-generated part of the page that you have no control over. What happened was, someone linked to you, and when they linked to you they sent a trackback with some illegal characters (illegal for you, not for them, since they declare a different character set than you do). But your publishing tool had a bug, and it automatically inserted their illegal characters into your carefully and validly authored page, and now all hell has broken loose.

The emails are really pouring in now. You desperately jump to your administration page to delete the offending trackback, but oh no! The administration page itself tries to display the trackbacks you’ve received, and you get an XML processing error. The same bug that was preventing your readers from reading your published page is now preventing you from fixing it!

The fact is, today's web is one where content might pour in from various locations, many of which you might not have control over. It is important to inter-operate with these kinds of content sources. Expecting strictness from an external source is not only overkill, it's folly.

I faced this problem when I worked on the Sacramento Kings website. The content came from various sources, some as trustworthy (for them, at least) as the NBA. Even so, content-encoding and ill-formed markup issues were big enough to drive the JavaScript crazy. I can't even imagine the problems I'd have faced if we had decided to use an XHTML Strict, or even Transitional, doctype for that job. How can you force a content author to ensure that their content validates - and validates against the same rules your site uses?

Simply use HTML. Let the onus of making sense of the content lie with the browser. It's not a human's job to make content appealing to a computer. If a computer cannot understand it, the computer should work harder. Not the human.

Just for the humor, check out this page that a friend happened to hit when pulling up W3.org the other day. I know I'm being harsh when I say this, but the guys who made the standard can't seem to respect it.

W3 Parsing Error

Tuesday, June 03, 2008

Goa

Just got back from a trip to Goa with colleagues. Here are some of the pics:

(Photos: Dog and Chair, Altar, Resort, Anurag)



Head over to Flickr for all the pics.

Monday, April 21, 2008

Getting iSync to work with the Nokia N91 8GB

My phone is a Nokia N91 8GB, and my primary OS these days is Mac OS X 10.4.11. When I followed the instructions I could find all over the web - the simple connect, launch, sync routine - it simply didn't work. iSync kept complaining that it couldn't sync with my device.

That just meant it's time to get under the hood and do some hacking! If you ever face this problem, here's how you solve this issue.

  • Quit iSync if you already have it open.
  • Go to your applications folder, right-click on the iSync application icon, and select "Show Package Contents".
  • Keep clicking through to the following path: Contents/PlugIns/ApplePhoneConduit.syncdevice/Contents/Plugins/Nokia-N91.phoneplugin/Contents/Resources.
  • Once there, duplicate the MetaClasses.plist file just to have a backup. You can do that by right-clicking the file, and selecting "Duplicate".
  • Now, open the file with a text editor - I used TextWrangler - and search for the line that says <key>com.nokia.n91</key>. A little lower, you should find a <key>Identification</key>. Right below that should be a <dict> XML node.
  • Edit it to look as follows:
    <dict>
        <key>com.apple.usb.vendorid-modelid</key>
        <string>0x0421/0x042F</string>
        <key>com.apple.gmi+gmm</key>
        <array>
            <string>Nokia+Nokia N91-1</string>
            <string>Nokia+Nokia N91-2</string>
        </array>
    </dict>

That's it - you are set. Now launch iSync, and it should be able to work with your N91 8GB as advertised.

The key here is that the N91 8GB model is identified as Nokia+Nokia N91-2, which isn't available in the supported devices list by default - the regular N91 is identified as Nokia+Nokia N91-1 or simply Nokia+Nokia N91, which is supported. The hack just adds support for the N91 8GB by telling iSync to treat it as the Nokia+Nokia N91-1 for all practical purposes.

Thursday, February 28, 2008

Web 2.0 Professionals

Got this job offer in my mail today:

COMPANY: [Company Name] ...with a reputation of being a leading supplier of networking equipment and network management for the Internet.

LOCATION: Bangalore

JOB OVERVIEW: We are looking for Senior Web.20 professionals on a very Urgent basis. Person with good relavant experience is most preferable.

EXPERIENCE: Min. 4 yrs to 12 yrs

POSITION: Permanent

If you a are looking for a change then plz fwd your updated word formatted (*.doc) CV to me ASAP with following details:

[snip]

5. Experience in Web 2.0

[snip]

Typos and poor language apart, can someone please explain what a Web 2.0 Professional is? What does he do? How in the world is he supposed to have 4 to 12 years experience? And how can he give details of his Experience in Web 2.0? Not to mention, I am very curious about how Web 2.0 is relevant to a leading supplier of networking equipment and network management for the Internet.

I guess this just shows that we are at the peak of the hype.

Somewhere in a board room:
How do we boost sales of our NICs and routers?
What we really need is some Web 2.0 action going for us.
What is that?
Don't you know? Everyone's talking about it. It's the greatest thing to have happened!
Ok. Get HR to find a Web 2.0 Professional. Make sure he has a decent amount of experience.

Tuesday, January 29, 2008

On X-UA-Compatible

There's been so much said about this...

... that if I write an opinion piece, it will go unnoticed.

So, what do I think anyway? If this is MS's only chance at fixing the web, I love the idea. However, this is a drastic step, and MS cannot botch it up. If they do, no one will want to develop for their browser anymore. If no sites are written for their browser, users won't use it anymore. As a front-end developer, having to cater to three different browser types (IE6, IE7 and the good browsers), with HUGE differences between them, is already a pain in the wrong spots. Adding one more to the mix will only worsen the situation. But if IE8 starts actually behaving like the good browsers, we can finally hope that all our problems will vanish along with IE6 and IE7 - whenever that happens.

So, if MS thinks that this is the solution to all their problems, so be it. The world will comply this one last time. This is a lot of trouble. It better be worth it. If this ends up having a less-than-desirable result, MS is doomed. IE is doomed. And the web will be a better place anyway, IE or otherwise.

Thursday, January 24, 2008

ShareThis