Firefox vs Internet Explorer… Who’s Really At Fault
There's been a lot of back-and-forth discussion lately over the reason for the Firefox 2.0.0.5 release. Thor Larholm published details of a vulnerability in which data passed from IE to Firefox via the firefoxurl protocol handler could be used to execute arbitrary commands. Thor argued this was a vulnerability in IE, which failed to properly parse the data it was passing along; the IE team responded that they weren't responsible for the data being passed, and that it was up to Firefox to parse it properly. The Firefox team released a patch (Firefox 2.0.0.5) that prevents Firefox from accepting bad data passed in from Internet Explorer, and Window Snyder commented on it on the Mozilla Security blog. Asa Dotzler followed with a second post questioning the IE team's claim that it would be too difficult to provide proper protection on their side. Jesper Johansson then responded to all of this. So now that we've laid out the background... let's discuss.
I think that Microsoft's, and specifically the IE team's, claim that it would be too difficult for them to patch this issue is bogus. There's no reason why IE couldn't parse the URI before passing it along, escaping certain characters, perhaps based on the URI RFC's definition of reserved and unreserved characters. That said, as Jesper pointed out, Firefox doesn't do this either... so perhaps both browsers should. While it may not be a standard practice, if the two major browsers were to perform this operation, others would follow suit. Should Firefox refuse bad data? Yes. However, the entire concept of multi-tiered security is that you have a moat, a wall, and other levels of defense. Assuming that Firefox's change to parse and sanitize the data is the wall, why couldn't IE provide a moat, with the protocol handler acting as the drawbridge that allows access?
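To make the suggestion concrete, here is a minimal Python sketch of the kind of escaping described above. This is not IE's actual code, and the hostile URI shown is an illustrative stand-in for the quote-and-space injection that was reported; the idea is simply to percent-encode every character outside RFC 3986's unreserved and reserved sets before handing the URI to an external protocol handler, so stray quotes and spaces can't break out onto the handler's command line.

```python
from urllib.parse import quote

# RFC 3986 reserved characters, which keep their structural meaning
# inside a URI and so are left un-encoded here.
RESERVED = ":/?#[]@!$&'()*+,;="

def sanitize_uri(uri: str) -> str:
    """Percent-encode everything except unreserved and reserved characters."""
    # quote() already treats letters, digits, and -._~ (the RFC 3986
    # unreserved set) as safe; we add the reserved set on top.
    return quote(uri, safe=RESERVED)

# An illustrative hostile URI: the double quotes and spaces are what
# would otherwise let extra arguments sneak onto a command line.
hostile = 'firefoxurl://test" --argument "payload'
print(sanitize_uri(hostile))
# The quotes become %22 and the spaces %20, defusing the injection.
```

A clean URI passes through unchanged, so well-behaved pages are unaffected; only URIs that were already malformed get rewritten.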
So this says to me that it's the responsibility of the creator of the URI to ensure that it is "correct" and "valid". The problem is that this relies on nothing ever being used maliciously. Software is continually updated to deal with people passing invalid data or using it incorrectly. This is another example of someone using a standard incorrectly for malicious ends, so why not modify the other end and stop relying on the user to form something valid and correct? I'd say this is no different from the browser tests performed by software like Hamachi (IE was patched against problems found by Hamachi). Yes, in that case the browser was rendering the HTML, so why not make the cleaning or "normalizing" of URIs a part of HTML rendering? Larry went so far as to suggest as much in his email:
You could, of course, suggest that HTML 5.0 or XHTML 2.0 or whatever define the HREF or SRC attributes as containing a ‘URI like thingie’ and define the interpretation of the IMG or A elements as requiring normalization (according to a supplied algorithm) first. But then that would be in the domain of those languages.
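For what such a "supplied algorithm" might look like, RFC 3986 already defines syntax-based normalization: lowercase the scheme and host, and re-encode the path so that unnecessary percent-encodings of unreserved characters collapse while unsafe characters stay encoded. A rough Python sketch, assuming the simple case (note that decoding and re-encoding a path can change its meaning if it deliberately contains things like %2F, so a real implementation would be more careful):

```python
from urllib.parse import urlsplit, urlunsplit, quote, unquote

def normalize(uri: str) -> str:
    """Apply a rough RFC 3986 syntax-based normalization to a URI."""
    parts = urlsplit(uri)
    scheme = parts.scheme.lower()   # scheme is case-insensitive
    host = parts.netloc.lower()     # so is the host name
    # Decode then re-quote the path: %7E collapses back to ~ (unreserved),
    # while spaces and other unsafe characters come out percent-encoded.
    path = quote(unquote(parts.path))
    return urlunsplit((scheme, host, path, parts.query, parts.fragment))

print(normalize("HTTP://Example.COM/%7euser/a b"))
# -> http://example.com/~user/a%20b
```

The point isn't this particular implementation; it's that a well-specified normalization step gives both the producer and the consumer of a URI the same definition of "correct", which is exactly what the current he-said-she-said lacks.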
So should we point the blame at Firefox or IE? How about both, and neither, at the same time. Instead of bickering about who's right, both sides should, as Firefox has already done, remedy the portion of the problem they are responsible for. Perhaps now it's time to sit down and redefine HTML, XHTML, or something else to provide protection for the user against malicious individuals, instead of saying, "Sure it's a problem, but we're not doing anything that violates the RFC, so it's not our problem to deal with." When it affects your users... it's your problem, so let's all deal with it.