Archive for July, 2006

Internet Explorer 7 and Automatic Update

Thursday, July 27th, 2006

So, maybe I missed the scoop. According to IEBlog, it turns out Internet Explorer 7, commonly referred to as IE7, will be released as a high-priority update via Automatic Updates. Or will it?

It had long been rumored that IE7 would be released via Automatic Updates. However, in an interview with Chris Wilson posted on Think Vitamin on July 24th, 2006, the question “Is IE gonna update to IE7?” was answered as follows:

Is IE gonna update to IE7? So, I think the first thing, really, is that we can’t really force it on users… I mean, that is not our goal. We really like to offer users choice. It is a different user interface. Some people will be really jarred by that. I think that we certainly want to encourage everyone out there to… I do believe that we will offer it through Windows Update but it won’t be an automatic silent update. Certainly, it won’t be like one day you come in and suddenly your computer is running IE7 rather than IE6. Certainly, we have to ask the user if they really want it. As nice as it would be to blast it on to everybody’s system, I don’t think that can happen.

Chris Wilson is group program manager for Internet Explorer Platform and Security at Microsoft. He’s someone I would expect to be in the know. Yet according to IEBlog, something changed in two days. IEBlog says the following:

To help our customers become more secure and up-to-date, we will distribute IE7 as a high-priority update via Automatic Updates (AU) shortly after the final version is released for Windows XP, planned for the fourth quarter of this year.

This has caused some confusion. The IEBlog post seems to suggest the distribution plan changed recently. I’d hazard a guess that the change was in the making before the Chris Wilson interview but only trickled out of the official blog on July 26th.

The announcement seems to describe a delivery method that is a hybrid of the install-if-you-want-it method Chris Wilson talks about and the silent update he shuns. The update will download via Automatic Updates, but the install won’t be silent. The user will be prompted to ignore the IE7 install completely, put it off until later, or install it. Also, the post claims that IE7 will be able to roll back to IE6.

So, it looks like the Internet Explorer Team has managed to snuggle up in the middle.

Some Thoughts

I appreciate their user-choice attitude, but I think now is a bad time to apply it. Internet Explorer is the largest source of installed spyware and adware. The fact that the interface might be a shock to users accustomed to IE6 is far less important than protecting them from themselves. But I really, really, really want IE7 to be adopted as fast as possible so I don’t have to worry so much about IE6 CSS hacks. In other words, I’m biased.

The other odd quirk is that the post promises upgrading to IE7 will preserve your current toolbars shortly after it lauds the browser’s advanced security features, including ActiveX Opt-in, the Phishing Filter, and Fix My Settings. So IE7 will let users keep all their spyware- and adware-infested browser cruft despite all the security enhancements the Internet Explorer Team created to prevent people from installing malware. Good thinking, guys. I understand leaving favorites, home page settings, and other preferences alone, but the typically malware-sporting toolbars should be left out. If the user really cares about a toolbar, he’ll go find it again. Blame it on incompatibility. Firefox does it with extensions all the time.

Finally, I’m worried about the roll-back-to-IE6 promise. I’ve heard stories about how big of a headache it is to uninstall the IE7 betas. I would think a working uninstall for a beta product would be a high priority for a company that wants people to test it. Apparently, that is not the case at Microsoft. I hope they improve the uninstall substantially for the final release.

Examining John C. Dvorak’s Anti-CSS Article

Tuesday, July 18th, 2006

We all know it. John C. Dvorak is a troll. He’s not just a troll. He’s an uninformed troll. In Why CSS Bugs Me, an article posted on June 12th, 2006, on the PC Magazine website, John C. Dvorak argues that CSS is useless.

The base assumption of his article is that he actually knows what he is talking about. John C. Dvorak is not a designer. He’s a tech columnist. His blog site, which he claims to be redesigning, is proof enough of that. His evident inability to style a website should make his opinion on CSS worthless. Nevertheless, I found the article linked from CSS Insider, which means someone cared about his opinion.

I know I’m in for a treat when the opening line of an article ends with a question mark and an exclamation point. Dvorak claims that none of this stuff works. That statement is absolutely wrong. A great deal of it works. In fact, almost everything works as expected. If I set font-family: verdana, helvetica, sans-serif on a div, every important graphical browser released since I started using the web will render the text in one of the listed fonts. It’s only when using advanced CSS that problems arise.

What John seems to be upset about is the PC platform. Frankly, every major browser for the Mac is pretty standards compliant. Opera was a bit of a problem before Opera 9 was released, but even Opera 8 rendered most things the same way Firefox and Safari did. The problem lies on the PC, where one of the major browsers, which happens to be the most popular, is not standards compliant. That browser, in case you’ve been living in a hole, is Internet Explorer. Microsoft even prides itself on how well it embraced and extended standards (filters, for example). The major problem for most designers is that the Internet Explorer rendering engine gets the box model wrong. But, really, with conditional comments, serving a bit of extra CSS to Internet Explorer is a trivial task.

The point is that he is picking on CSS instead of picking on the people who write the CSS rendering engines. He should be bad-mouthing Microsoft, not CSS.

I take that back. For a moment, he did take a swipe at the defining principle of CSS: it cascades.

The first problem is the idea of cascading. It means what it says: falling – as in falling apart. You set a parameter for a style element, and that setting falls to the next element unless you provide it with a different element definition. This sounds like a great idea until you try to deconstruct the sheet. You need a road map. One element cascades from here, another from there. One wrong change and all hell breaks loose. If your Internet connection happens to lose a bit of CSS data, you get a mess on your screen.

I’ll ignore the part about the connection losing a bit of CSS data because it’s an asinine suggestion. I can think of only one situation where it would happen, and it wouldn’t totally ruin the design if the CSS were written properly. What I want to address is that what Dvorak is bitching about is exactly how CSS was designed to work. The beauty of it is that I can declare a font-family on the body and it cascades down into paragraph tags; I don’t have to explicitly set the value every time. Further, plain old HTML had some amount of inheritance built in (for example, wrapping a bold tag in a font tag causes whatever is in the bold tag to inherit the font information). The cascading nature isn’t broken; rather, Dvorak’s understanding of CSS, or his ability to use it, is underdeveloped. If he’s having trouble figuring out where a style comes from, that is probably the fault of the stylesheet’s author or of his own ability to read CSS, not the fault of CSS. Well-written CSS, like well-written C or PHP, is quite easy to read if the reader knows what he is reading.

John closes his article by throwing punches at the W3C.

And what’s being done about it? Nothing! Another fine mess from the standards bodies.

First, it’s not the W3C’s fault that browsers don’t implement the standard properly. That’s like blaming lawmakers when criminals commit crimes. If there were only Mozilla and Internet Explorer, I might concede that the standard is unclear. However, Opera, Safari, and Mozilla typically agree on how a page should look, so I can’t blame the specification.

Second, something is being done about it. Initiatives like the Web Standards Project have been busting ass for years to get browser makers to stick to standards, and the web standards movement is gaining major footholds. The only major graphical browser that isn’t standards compliant is Internet Explorer, and the IE7 team is working hard to make it more standards compliant. Pretty soon, the differences between how browsers render pages will be almost nil. The only difference will be how soon browser makers integrate new CSS features, like those in the CSS 3 drafts that are already being built into browser releases.

It seems John got his panties in a wad over this one. For once, I think this was a real rant rather than a calculated troll to boost the page views on his articles. Unfortunately, he comes across as a whining web design newbie. Those of us who have been doing standards-based web design for a while have dealt with far worse than he has, and we still appreciate CSS for the excellent tool it is.

If you want to read the article, it can be found on PC Magazine.

JSON, An Alternative to XML in AJAX

Tuesday, July 18th, 2006

I admit it. I like Ajax. Sure, it’s a dumb buzzword. But, as you’ll recall, in How To Make an Ajax Chat Room, I said something like, “It just turns out XML is slightly more usable for complex stuff,” and that synchronous requests defeat the point, though Ajax has many uses. The point I was trying to make is that sometimes sending plain text back instead of XML is nice. For example, I can send back a 1 if the action was successful. But sometimes that isn’t enough. Sometimes, text and XML are both the wrong tools for the job. That’s where JSON comes in.

The Problem With XML

The problem with XML is that it is a bitch to work with. XML files get big when they use meaningful metadata. To boot, they are a pain in the ass to create when all you really want to do is send some simple data that is just too complex for plain text. Then, when it comes time to use the XML in JavaScript, you have to do lots of DOM scripting just to get to the data.

Since I don’t do much PHP work, I don’t have to deal with creating the XML files. I do, however, have to access the data from the DOM. I got so irritated by it that I wrote a function in my Ajax class to convert simple XML files into JavaScript associative arrays. It turns out I was taking an unneeded step.
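The kind of helper I mean might look something like this. This is only a sketch of the idea, not my actual class code; the function name is made up, and it only handles flat, one-level XML like <result><name>Ann</name><age>30</age></result>.

```javascript
// Hypothetical helper: walk the children of an XML root element and
// build a plain object keyed by tag name. Only handles one level deep.
function xmlToObject(root) {
    var obj = {};
    var children = root.childNodes;
    for (var i = 0; i < children.length; i++) {
        var child = children[i];
        // Skip whitespace text nodes between elements (nodeType 1 = element).
        if (child.nodeType !== 1) continue;
        obj[child.nodeName] = child.firstChild ? child.firstChild.nodeValue : "";
    }
    return obj;
}
```

After that, obj["name"] reads like any other associative array, no DOM calls needed.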

The Problem With Text

As I mentioned, sending text back is often useful. Since XMLHttpRequest is smart enough to recognize the content type of the returned data, I often send errors back as text and do something like this to display them: if(!ajax_obj.responseXML){alert(ajax_obj.responseText);}. Sometimes, however, I return 1 to signal success and a string error message to report an error. While I’ve made this standard practice, it can get confusing when different Ajax requests return different styles of messages.

Making things more complex, sometimes a single raw value doesn’t warrant an XML wrapper, so for the sake of speeding up the request the value is sent as text. This mucks with the paradigm I mentioned before.
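To make that convention concrete, here is a sketch of how such a mixed text/XML response might be sorted out. The handleResponse name and its return shape are invented for illustration; it just formalizes the responseXML check shown above.

```javascript
// Hypothetical sorter for the mixed convention described above:
// an XML body carries real data, plain text "1" means success,
// and any other text is an error message.
function handleResponse(ajax_obj) {
    if (ajax_obj.responseXML) {
        return { ok: true, data: ajax_obj.responseXML };
    }
    if (ajax_obj.responseText === "1") {
        return { ok: true, data: null };
    }
    return { ok: false, error: ajax_obj.responseText };
}
```

The confusion I mention comes from every request needing its own version of this branching.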

XML is nice for documents. It’s a pain to work with as a data exchange format. Text is simpler, but can be inconsistent, often not providing enough useful data. That is where JSON picks up.

JSON, short for JavaScript Object Notation, is exactly what it says it is: the syntax used to create objects in JavaScript (and, apparently, Python too). If you are curious, it looks like this in JavaScript:

my_object = {"item1": "value", "item2": "value"};
document.write(my_object["item1"]); // Outputs "value"

You can nest objects inside objects, too.
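For example, a nested version (the values here are made up) looks like this; objects and arrays can themselves be values:

```javascript
// Objects and arrays nest freely inside JSON objects.
var my_object = {
    "user": { "name": "Ann", "roles": ["admin", "editor"] },
    "count": 2
};
// my_object["user"]["name"] and my_object.user.name are equivalent.
```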

Using JSON Instead of XML

Outputting XML from PHP is straightforward. JSON is similarly straightforward. The PHP script simply dumps out text like the object literal notation shown above (without the variable assignment). The response is stored in the responseText property.

When an XMLHttpRequest is sent to the server and XML is received, the XML is automagically parsed into a DOM object. JSON requires an extra step to assign the parsed object to a variable you can use in your script:

my_object = eval("(" + request_object.responseText + ")");

It’s pretty easy. If integrated into a class, the eval could be done on-the-fly.
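Here is a rough sketch of that on-the-fly eval; request_object is a stand-in for a finished XMLHttpRequest, faked with a string literal so the shape of the trick is clear.

```javascript
// Stand-in for a completed XMLHttpRequest whose server sent JSON text.
var request_object = { responseText: '{"status": "ok", "id": 42}' };

function parseJsonResponse(req) {
    // The wrapping parentheses force eval to treat the braces as an
    // object literal expression rather than a code block.
    return eval("(" + req.responseText + ")");
}

var result = parseJsonResponse(request_object);
// result.status is "ok" and result.id is 42, ready for normal use.
```

Inside a class, this parse step would live in the request-complete callback so callers only ever see the finished object.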

Freddy — I mean, XML vs JSON

I’m still a JSON newb, but I’ve done a good deal of Ajax work, and I’ve seen some very irritating stuff. For example, when fetching a few hundred records in XML format, the file can get large and cause the browser to freeze while the XML is parsed. Since parsing JSON is no different from executing JavaScript code, it should avoid the crawl.

Unless XML tags are really lightweight (which defeats the whole metadata concept of XML), the files can get really big. JSON doesn’t need as much tagging, so file sizes can be smaller.
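A tiny made-up record shows the difference. The exact byte counts depend on the data, but the closing tags alone account for most of the gap:

```javascript
// The same record expressed both ways; the field names are invented.
var xml  = "<record><name>Ann</name><email>ann@example.com</email></record>";
var json = '{"name": "Ann", "email": "ann@example.com"}';
// Comparing .length: the JSON version is roughly a third smaller here,
// because every XML tag name appears twice (open and close).
```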

XML is pretty simple to write. JSON is pretty simple to write. XML must be well-formed. JSON must be syntactically correct. JSON just happens to require less typing than XML.

I’m throwing in this bonus, even though it isn’t specific to JavaScript: not all languages have nice XML support. PHP 4, for example, will do XML; it is just really hard. There are quite a few JSON interpreters for various languages, and JavaScript and Python have support built in. With XML, one has to learn the XML library. With JSON, all one needs to know is how to work with normal objects in that language. Specifically with JavaScript, you don’t need to learn DOM scripting to work with JSON. You just need to know how to use associative arrays.
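For instance (the values are made up), looping over JSON data is just ordinary associative array access, with no DOM walking anywhere:

```javascript
// Parsed JSON is just an object, so a plain for...in loop reads it.
var prefs = { "theme": "dark", "fontSize": "12px", "sidebar": "left" };
var pairs = [];
for (var key in prefs) {
    pairs.push(key + "=" + prefs[key]);
}
// pairs now holds "theme=dark", "fontSize=12px", "sidebar=left"
```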

Real World Testing

I did a simple test. I wrote a PHP script that creates an XML file with 300 values and a JSON file with 300 values. I later modified the loop to 1000 values to see whether an article I read was right that Ajax performance breaks down exponentially. It’s basically true, by the way.

To be fair, I had already written the code, then modified it to run on JSON. For that to work, I had to make the JSON output match what my XML file looked like after being turned into an object. The XML file was already optimized to be very small. So, while that gives my JSON some size overhead (since it had to mimic nodes), it also gives my Ajax some parsing overhead (since the XML was converted to an object before use). What I mean is that the code I used could be more efficient, and I might rewrite it eventually.

The JavaScript reads the XML or JSON file and uses the information to populate a select element. I uploaded the files to a server across the country from me and ran them. The server gets spikes of heavy use, so the results may be skewed.
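In spirit, the population step looked roughly like this. This is a simplified, hypothetical version with made-up field names, and it stops short of the actual DOM work: in the browser, each pair would then become an option via document.createElement("option") and be appended to the select.

```javascript
// Turn fetched JSON records into text/value pairs for a select element.
// The id/label field names are invented for illustration.
function buildOptions(records) {
    var options = [];
    for (var i = 0; i < records.length; i++) {
        options.push({ text: records[i].label, value: records[i].id });
    }
    return options;
}

var records = [{ id: 1, label: "First" }, { id: 2, label: "Second" }];
var opts = buildOptions(records);
```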

At 300 records, I couldn’t really tell a difference between the JSON and the XML. In early tests at 1000 records, it was debatable. JSON was at least as fast as XML (usually around 7 seconds from refresh until the select was populated). Some requests were far faster with JSON (2 seconds instead of 7). Later tests revealed a much larger gap, with XML taking 40 seconds and JSON taking only 15.

The Golden Fleece

I like JSON for what it is. It’s very capable and much lighter than XML, both in parsing cost and file size, and as an alternative to XML it has quite a few undeniable upsides. I’ll probably do further testing before rewriting all my legacy code, but I will be using JSON in the future.