Firefox vs Internet Explorer… Who’s Really At Fault

There's been a lot of back and forth lately over the reason for Firefox 2.0.0.5. Thor Larholm released details on a vulnerability in which data passed from IE to Firefox via the firefoxurl protocol handler can be used to execute arbitrary commands. Thor argued that this was a vulnerability in IE, which failed to properly sanitize the data it was passing along, while the IE team responded that they weren't responsible for the data being passed and that it was up to Firefox to properly parse it. The Firefox team released a patch (Firefox 2.0.0.5) that prevents Firefox from accepting bad data passed in from Internet Explorer, and Window Snyder commented on it on the Mozilla Security blog. A second post, written by Asa Dotzler, questioned the IE team's claim that it was too difficult to provide proper protection against this on their side. Jesper Johansson then responded to all of this. So now that we've laid out the background... let's discuss.
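
To make the mechanism concrete before digging in, here's a rough sketch of the class of problem being described. The command-line template and flag below are made-up stand-ins for illustration, not the actual values registered for the firefoxurl handler:

    # A percent-encoded quote survives until the URI is decoded and substituted
    # into a handler's command line. The template and flag are illustrative only,
    # not the real firefoxurl registration.
    from urllib.parse import unquote

    handler_template = 'firefox.exe -url "{uri}"'   # hypothetical handler command line
    hostile_uri = 'firefoxurl://example%22%20-some-flag%20%22payload'

    decoded = unquote(hostile_uri)                  # the %22 and %20 become " and space
    print(handler_template.format(uri=decoded))
    # firefox.exe -url "firefoxurl://example" -some-flag "payload"
    # The embedded quotes close the -url argument and smuggle extra arguments in.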

I think that Microsoft's, and specifically the IE team's, claim that it would be too difficult for them to patch this issue is bogus. There's no reason why IE couldn't parse the URI before passing it along, escaping certain characters, perhaps based on the URI RFC's definitions of reserved and unreserved characters. That said, as Jesper pointed out, Firefox doesn't do this either... so perhaps both browsers should. It may not be standard practice, but if the two major browsers were to perform this operation, others would follow suit. Should Firefox reject bad data? Yes. However, the entire concept of multi-tiered security is that you have a moat, a wall, and other levels of defense. If Firefox's change to parse and sanitize the data is the wall, why couldn't IE provide a moat, with the protocol handler acting as the drawbridge that allows access?
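
Something along these lines would be enough. This is only my sketch of the idea, not either browser's actual code, using the RFC's reserved and unreserved characters as the set allowed to pass through unencoded:

    # Before handing a URI to an external protocol handler, percent-encode every
    # character outside the RFC 3986 reserved and unreserved sets, so quotes and
    # spaces can never reach the handler raw.
    from urllib.parse import quote

    # Reserved (gen-delims + sub-delims) and unreserved punctuation, plus '%'
    # so already-encoded sequences pass through untouched.
    SAFE = ":/?#[]@!$&'()*+,;=-._~%"

    def escape_for_handler(uri):
        """Percent-encode anything outside the allowed URI character set."""
        return quote(uri, safe=SAFE)

    print(escape_for_handler('firefoxurl://example" -some-flag "payload'))
    # firefoxurl://example%22%20-some-flag%20%22payload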

I actually went so far as to fire off an email to the authors of the URI RFC, and at this point I've heard back from Larry Masinter. He provided additional clarification and insight into the RFC, including an example of where responsibility lies when URIs are constructed on the fly by JavaScript.

For example, I could write a Javascript program that constructed URIs on the fly and inserted href’s to them; it would be the responsibility of the Javascript program to construct the ‘correct’ URI. However, HTML 4.0 and XHTML 1.0 don’t supply any justification for trying to ‘normalize’ URIs before they’re sent to the URI handler, except perhaps the suggestion in the appendix that non-ASCII characters be expressed in UTF8 and then percent encoded.

So this says to me that it's the responsibility of the creator of the URI to ensure that it is "correct" and "valid". The problem is that this relies on nothing ever being used maliciously. Software is continually updated to deal with people passing invalid data or using it incorrectly. This is another example of someone using a standard incorrectly for malicious ends, so why not modify the other end and stop relying on the sender to form something that is valid and correct? I'd say this is no different than the browser tests performed by software like Hamachi (IE was patched against problems that Hamachi found). Yes, in that case the browser was rendering the HTML; well, why not make the cleaning or "normalizing" of URIs a part of HTML rendering? Larry went so far as to suggest as much in his email.

You could, of course, suggest that HTML 5.0 or XHTML 2.0 or whatever define the HREF or SRC attributes as containing a ‘URI like thingie’ and define the interpretation of the IMG or A elements as requiring normalization (according to a supplied algorithm) first. But then that would be in the domain of those languages.
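
As a rough sketch of what such a "supplied algorithm" could look like, here's the appendix suggestion Larry mentions — non-ASCII characters expressed as UTF-8 and then percent-encoded before the href ever reaches a handler. This is purely an illustration of the suggestion, not anything HTML actually defines:

    # Illustrative normalization: express non-ASCII characters as UTF-8 and
    # percent-encode them (along with anything else outside the URI character set).
    from urllib.parse import quote

    def normalize_href(href):
        # quote() encodes non-ASCII text as UTF-8 by default before escaping it
        return quote(href, safe=":/?#[]@!$&'()*+,;=-._~%")

    print(normalize_href("http://example.com/søk?q=résumé test"))
    # http://example.com/s%C3%B8k?q=r%C3%A9sum%C3%A9%20test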

So should we point the blame at Firefox or at IE? How about both, and neither, at the same time. Instead of bickering about what's right, both sides should, as Firefox has already done, remedy the portion of the problem they're responsible for. Perhaps now it's time to sit down and redefine HTML, XHTML, or something else to provide protection for the user against malicious individuals, instead of saying, "Sure it's a problem, but we're not doing anything that violates the RFC, so it's not our problem to deal with." When it affects your users... it's your problem, so let's all deal with it.

  1. July 22nd, 2007 at 05:25 | #1

    I don’t think I would come to the same conclusion you did from my remarks.

    Sure, software that creates data is responsible for creating correct data. But the URI specification is pretty broad, and it’s not even clear that the interface in question isn’t more liberal than the RFC.

    Any software that accepts data from another source is responsible for insuring that the data doesn’t cause a failure or a security vulnerability. Nothing that any standards group might say could reduce that responsibility.

    “Normalization” of URIs in some circumstances is quite undesirable, and should be avoided until it’s necessary, preferably at the endpoints of the communication (when it is constructed and when it is parsed) rather than several times during intermediate phases.

  2. July 22nd, 2007 at 11:45 | #2

    Thanks for the further clarification, Larry.

    I agree 100% that the recipient should validate the data; however, as I said, I don't see a reason why you can't have a multi-tiered defense in which the party passing the data does the same. It may be undesirable, but I see potential benefits in this case.

  3. Harry Johnston
    July 22nd, 2007 at 14:13 | #3

    I don’t see what you mean by “the URI specification is pretty broad”. It makes it perfectly clear that only a specific set of characters are permitted unencoded in a URI; quote marks are not one of the permitted characters.

    As far as I’m concerned, Windows/IE should not hand an illegal URI (such as one containing unencoded quote marks or spaces) to a registered URI handler; that’s just common sense.

  4. Harry Johnston
    July 22nd, 2007 at 18:54 | #4

    Further to my previous comment, it shouldn’t be assumed that the correct response to an invalid URI should be to normalize it and pass it to the handler. I think there’s a good case for simply rejecting illegal URIs outright.

    That is, when the user clicks on the link – whether it is embedded in static HTML, generated by Javascript, or whatever – IE could simply advise the user that the link is invalid and refuse to attempt to follow it.

  5. Harry Johnston
    July 23rd, 2007 at 23:37 | #5

    Actually it turns out that this wouldn’t help, because the Windows specification for registering URI handlers requires that URIs be decoded before being passed to the handler. This means even a legal URI could take advantage of the vulnerability.

    So Microsoft is right in saying they can’t fix the problem … not without potentially breaking third-party software that depends on the documented behavior.

    Of course decoding the URI before passing it to the registered handler is a silly thing to do, but it’s probably too late to change now.

  6. Harry Johnston
    July 23rd, 2007 at 23:38 | #6

    Sorry, forgot the link:

  7. Harry Johnston
    July 23rd, 2007 at 23:38 | #7