Archive for the ‘Web Development’ Category

WYMEditor HTML Textarea Loses Focus

Thursday, January 24th, 2008

I’ve been using the WYMEditor on a little content management system I’ve been building at work. I was having a problem where I couldn’t edit in the HTML editor. Every time I clicked, the HTML textarea in WYMEditor would lose focus.

It turns out the problem was the way I used the label for the textarea that turns into the WYMEditor. There are two ways to use a label. One is to tie it to a form element via for="element-id", where element-id is the id of the form element. The other is to simply encapsulate the form element inside the label. I was doing the latter because of how I had written my CSS for displaying forms.

Labels focus their corresponding form element when clicked (or when elements inside them are clicked). If you click the form element directly, focus stays where you clicked. What I was failing to realize was that WYMEditor puts its replacement / overlay inside the label. The visual editor itself uses an iframe, which the label’s click-to-focus behavior doesn’t affect. The HTML editor portion, though, uses another textarea. In Firefox, when I clicked that textarea to edit, focus would shift to the hidden (and first in the code) original textarea.

The solution was to use the for method instead of the encapsulation method for WYMEditor instances and to modify my CSS accordingly.

iPod Touch Pushing the Mobile Web

Friday, September 14th, 2007

As most know, Apple released a new lineup of iPods, and possibly a major improvement for the mobile web.

The iPod Touch is essentially the iPhone without the cellular radios. It has the same WiFi, runs the same operating system, and has many of the same programs. Most important is Safari Mobile.

A major stumbling block for the mobile web is the crappy and disparate browsers. Most don’t do very well with HTML, or, worse, only work with WAP. Programs like Opera Mini and the S60 Browser from Nokia have made great inroads for the mobile web. The release of the iPhone improved user interaction with the web in the mobile space.

Now the iPod Touch will bring Safari Mobile to a more ubiquitous space. In a comment on The good, the bad, and the ugly – iPhone edition, I claimed that Safari would never be another Internet Explorer. I stand firm on that, as Safari Mobile is still niche, but it stands to be a much bigger player (and possibly a nuisance if these iPhone-only sites proliferate).

Whatever the case, the mobile web has now become more ubiquitous. I know for certain that I’d pull out my iPod Touch (if I had one) to browse the web long before I’d reach for my mobile phone. The only drawback is the lack of a cellular radio. If Apple could work out a data-only 3G radio for the iPod Touch and pair it with a very cheap contract, the mobile web landscape would change drastically.

PHP Documentation Generators

Thursday, February 15th, 2007

I’ve been looking at documentation generators recently. Specifically, I’m playing with a new website that I want to do Right. Here are some notes for anyone looking at PHP documentation generators.

Languages

I needed a documentation generator primarily for PHP. However, I thought it’d be nice if it’d create documentation for JavaScript as well.

Possible Options

  1. PHPDocumentor

    Formerly called PHPDoc, PHPDocumentor was my first choice because I’d heard of it before. It’s based on JavaDoc and is conveniently packaged in PEAR. The syntax is a little weird at first, and documenting blocks of code inline doesn’t seem to be possible. However, it figures out a lot of useful stuff. PHPDocumentor, of course, doesn’t work with JavaScript. Despite that missing feature, this is the documentation generator I implemented.

  2. HeaderDoc

    HeaderDoc is Apple’s documentation system. It works with both PHP and JavaScript (as well as several other languages). I browsed the documentation but haven’t bothered downloading the Perl code to run it. I passed on it because it seems far too robust (that is, bloated) for what I need and lacks some of the niceties of PHPDocumentor (e.g. detecting function names).

  3. Natural Docs

    I was really excited about Natural Docs when I first heard about it. The syntax seemed easy, and it was supposed to understand inheritance, which would be a boon for JavaScript. Unfortunately, JavaScript and PHP are only supported at a basic level: Natural Docs reads the explicit comments but doesn’t parse the languages themselves. No nifty features like it has for ActionScript, Perl, and C#. Too bad.

  4. Others

    I briefly looked at a few others that supported PHP and JavaScript. RoboDoc has fugly syntax, and TwinText is supposedly Windows only (assuming it even existed). Neither was going to be right for me.

PHPDocumentor in Use

Install

As I mentioned, PHPDocumentor is a PEAR package. That made the install very easy. I actually overcomplicated it quite a bit because of a few errors I had with the script. For future reference, you want to run pear install PHPDocumentor-beta since PHPDoc is deprecated; one package will override the other.

As I said, when I tried to run phpdoc, I got an error about a missing file. Mac OS X apparently stores files in different places than the package expects. So, I had to edit the package and add chdir("/usr/lib/php/"); so that include("PhpDocumentor/phpDocumentor/phpdoc.inc"); would run from the proper directory.

Since I wanted my documentation to live in a subdirectory of my site, I had to write a shell script to clean out the documentation folder and then run the PHPDocumentor command. Otherwise, PHPDocumentor would attempt to document the documentation.
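Roughly, the idea is this, sketched here in PHP rather than shell so it matches the rest of this post. The paths are placeholders, not the ones I actually used, and your phpdoc options may differ.

// Placeholder paths; adjust to your own site layout.
$site = '/path/to/site';
$docs = $site . '/docs';
// Wipe the old documentation so phpdoc doesn't end up documenting itself.
exec('rm -rf ' . escapeshellarg($docs));
mkdir($docs);
// Regenerate the documentation into the now-empty folder.
exec('phpdoc -d ' . escapeshellarg($site) . ' -t ' . escapeshellarg($docs));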

Learning

I was a little bewildered by the syntax at first. This is mainly because it’s really hard to find solid examples of how to document code. The manual provides a little bit of instruction, but lacks real world examples. Luckily, I found a good example in the PEAR coding standards documentation. It wasn’t long until the syntax was second nature.

The only real gripe I have is that I’d like to be able to make inline comments about specific blocks of code, a switch statement for example. It turns out that you have to do that within the function’s discussion. You can print code blocks in the discussion area, however, so it’s almost as though you can do inline documentation.

I should also note that @var can only be attached to a class variable declaration, not to just any variable.
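For anyone who has never seen the syntax, here is a short sketch of the docblock style I ended up using. The class and names are made up for illustration, not from a real project.

/**
 * A made-up example class to show the docblock style.
 *
 * @package MySite
 */
class Page
{
	/**
	 * The page title. @var only works on class variables like this one.
	 *
	 * @var string
	 */
	var $title = '';

	/**
	 * Builds a navigation list for the page.
	 *
	 * Discussion of a specific block of code (a switch, say) has to go here
	 * in the docblock; there is no true inline documentation.
	 *
	 * @param string $current URL of the page being viewed
	 * @param array  $items   link text keyed by URL
	 * @return string the finished HTML
	 */
	function navigation($current, $items)
	{
		$html = '<ul>';
		foreach ($items as $url => $text) {
			$class = ($url == $current) ? ' class="current"' : '';
			$html .= '<li' . $class . '><a href="' . $url . '">' . $text . '</a></li>';
		}
		return $html . '</ul>';
	}
}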

Output

I struggled with the output. I hate frames, so I used the Smarty layout for output at first. Eventually, I switched to the default frames layout because it handled some of the output better (e.g. unordered lists in discussions). Otherwise, it makes nice, legible documentation that can be read in a web browser. It can also output PDF and Windows Help files.

Final Opinions

PHPDocumentor is a pretty nice tool. Not only does it create documentation, it introduces a standard for commenting that is very legible. While I wish I could have found a JavaScript and PHP documentation generator that I was happy with, PHPDocumentor does its job well.

How To Do Modern Web Design

Friday, January 12th, 2007

I’ve been trying to figure out the best way to explain how to do modern, semantic, standards compliant web design. I’ve been trying to make the point to my current protege that one should first look at a website like a term paper and move on from there. I’ve formulated a method now that I’d like to outline.

I once described how I see web design as a left brain plus right brain task. However, a new A List Apart article called Grok Web Standards breaks it down even more, suggesting that a web designer must think like a writer, artist, and engineer. The article inspired me to write the steps I think web designers should take to making a site.

I must say now that I seriously doubt anyone will ever work this way. It relies on the idea that content is given to the designer first. Rarely, if ever, will a client give you content without seeing a design. This is unfortunate because the goal of a website is to communicate a point; looking pretty is the icing. Until you have a very good reputation, you won’t be able to bake the cake without letting the client taste the icing first. However, I hope my steps are written well enough that, if need be, designs can be done first.

How To Do Modern Web Design

  1. Understand Your Content

    Content is king. Before you touch an editor, read your content and understand it. Once you understand what you want to present, it becomes easier to mark up and will give you ideas on how you want to present the content to the user. Make a mental outline of what is in the content to help make decisions. To be clear, content does not just mean the text on the page. Content includes navigation and footer information as well as regular page content and images.

  2. Format Your Content

    The first goal of a site (web apps excluded) should be to present a message in a clear manner. The best way to do this is to make sure your markup is structured and semantic. I like to think of it as manually marking up a term paper or essay. Term papers must be clearly arranged and defined in logical sections. Lists should be presented as lists, paragraphs as paragraphs, and headings as headings.

    Every designer should have some boilerplate to work with. That is, having a very simple HTML template with basic elements (e.g. DOCTYPE, head, title, body, and any other basic elements you usually work with) will save you a few minutes up front and allow you to get down to business. Smultron, for example, has this built in. If you don’t, write a basic, empty HTML document. Don’t include stylesheets yet.

    To qualify the previous paragraph, boilerplate is only really needed if you want to validate as you go. If you trust your ability to write valid disembodied markup, just bang out the markup in a text editor without the rest of the code. It’s how I’m writing this post since it’s going to be injected into a design that has the rest of the code. In the end, you might save some time by not using boilerplate (e.g. if you have tons of pages that will need extra elements inserted during step four below). For the beginners, though, understanding the document structure will be more useful than the time saved by skipping out on it.

    Your task is to turn your content into something logical, legible, and semantic. Use your understanding of the content to create a document. Based on your mental outline, use headings at the correct level (typically the document title should be the first-level heading), lists, paragraphs, blockquotes, and tables for tabular data. Give emphasis, citations, and hyperlinks. Front-load your lists if you need to. Don’t use divisions, fonts, or styles; do rely on the browser’s built-in formatting. Default styles are well thought out by the browser maker. You just want to make sure your content makes sense. Yes, it looks boring, but it also looks clean. When you view your document in a browser with no stylesheets, it should make sense when you try to read it.

  3. Design For Your Content

    By now, you should be pretty intimate with the content. You should have a clear idea about what the content is about. This will allow you to create a design that helps present that content in a truthful manner. Take what you know about your content and mockup something that can present it well. I suggest using Photoshop to work on mockups at this point. This will keep you from having to write a big stylesheet when you don’t know if you need it.

    Also, focus on usability. People who come to your site have no knowledge of how you intended them to use it. They do have certain knowledge about the way the web usually works, so it’s best to take that into consideration. If you break from tradition, you need to make it obvious how you intend the user to interact with the site.

    After you are certain that the content is accurately represented and the design is usable, then worry about aesthetics.

    Typically, clients will want to approve a design. So, make a good composite to show off. You may even want to do a little HTML and drop a full-sized composite into a browser so the client can get a feel for how it would look.

    If it turns out that you have to do this step first, you should at least have an idea about what sort of content the client wants on his site. You may have to use your imagination a little more. Don’t worry, though. This is where most people have to start, and I believe starting here can still produce a great site, even if it isn’t the ideal place to start.

  4. Start Merging Content and Design

    Open your basic document in Firefox, Opera, or Safari. Since these browsers basically agree with one another, this will help you start with a standards compliant site. Now is when you want to start working on your stylesheets.

    Start coding your stylesheet based on what you already have. In the old days, people misused tables to create designs. Using web standards, people tend to get div diarrhea instead: rather than using elements that already exist, they put divs around everything whether they need them or not. So, I suggest making creative use of headings, navigation lists, and paragraphs to trim down your markup and stylesheet as much as possible.

    The best place to start is by throwing out some of the default formatting. I know I just said it was well thought out. It still is. It just isn’t well thought out for your design.

    The first part of my main stylesheet clears margins, padding, and borders (i.e. *{margin:0px; border:0px; padding:0px;}). From there, it’s the designer’s job to figure out the best proximity. It also forces the designer to take a better look at default behavior and hopefully start coding it out (or coding it in) to have well-thought-out alignment and contrast, which makes cross-browser compatibility a little easier to achieve.

    As you go, you will probably find that you need a div here and there to help represent a certain aspect of your design. For example, I like my designs centered on the page. In HTML 4, the best way to do this is a div that has margin: auto wrapped around all the content (though in real XHTML, the html and body elements can be used to achieve the same ends with no extra markup).

    So, add extra divs when you absolutely need them, and make sure to either code in or code out default styles. Be creative, though. It’s your job. Usually the easiest solution is not the best.

  5. Make Sure It Is Accessible

    A pretty design is useless if it isn’t a good design. Good designs are accessible to as many people as possible. So, audit your site and make sure there aren’t any serious accessibility problems. Make sure images have meaningful alt text, form elements have labels, etc. Don’t forget niceties like skip-to-content links. This section really deserves an article of its own, so I won’t go into much detail.

    Since I’m on a Mac, I’ve familiarized myself with browsing web sites with VoiceOver. While it isn’t as good as the other web screen readers, it helps me get an understanding of what the page might be like. I highly suggest this (or some similar testing) be part of the audit if you have access.

  6. Debug for Internet Explorer

    Internet Explorer 7 was supposed to bring in a new age of standards compatibility. It didn’t. Even if it did, use of Internet Explorer 6 is still rampant. In the old days, stylesheet hacks were the way to account for differences in browser rendering (fighting bugs with bugs). Now, though, Internet Explorer has a proprietary concept called conditional comments.

    This is the first proprietary HTML concept Microsoft has created that is worth a damn. Conditional comments allow the designer to hide or show page content in Internet Explorer based on variables such as browser version. So, we create a conditional comment to show our Internet Explorer bug-fix stylesheet only to Internet Explorer (e.g. <!--[if lte IE 6.0]> <link rel="stylesheet" type="text/css" href="ie_fixes.css"> <![endif]-->). No hacks needed.

    Work in Internet Explorer to clear up any layout issues by overriding your real stylesheet’s values. So, if you had fieldset{border: 1px dotted gray;} in your real stylesheet, which causes weird results in Internet Explorer, you can change it by adding fieldset{border: 1px solid gray;} to your Internet Explorer stylesheet. Don’t get bent out of shape if things aren’t one-for-one the same. Sometimes a reasonable likeness is as good as you can get.

  7. Show It To The Client

    By now you have a semantic document custom tailored to your client’s content that is usable, accessible, and hopefully aesthetically pleasing. You can now show it to the client. Since you had the forethought to use web standards, you can make changes to every page at once via the few stylesheets you created. It will save you time and your client money. Since the code is semantic, search engines can better understand the content and index it properly, which may lead to increased revenue for your client. Everyone wins.

The Great MIME-Type Swindle

Sunday, November 19th, 2006

It’s a really old subject, but I haven’t said my piece on the XHTML 1.0 versus HTML 4.01 debate. While commenting on Roger Johansson’s blog, 456 Berea Street, I said a little bit about what I think. I figured I ought to go ahead and say the rest here.

XHTML was supposed to be the death of HTML. HTML 4.01, until recently, was supposed to be the last iteration of HTML. I think XHTML is great. It allows the designer to implement bits of XML, should he want to. It is also very strict, requiring proper syntax where HTML didn’t. If there is something wrong with the code, the page should not render at all. This makes good syntax a requirement rather than a suggestion. However, most of the XHTML pages I know of are implemented in a way that is ideologically broken. The rest don’t work in Internet Explorer.

The comments I made there assumed a little foreknowledge but basically say what I want to say. I’ll go ahead and lay out my full argument here.

In 2000, when XHTML 1.0 was introduced, there was a need for backward compatibility since most browsers could not render real XHTML. That is, browsers were built to render HTML served as text/html; sending XHTML as XML resulted in, I assume, an interpreted XML tree. So that adoption of XHTML would occur, it needed to work in browsers. So, the XHTML recommendation had guidelines for how to make XHTML work in HTML browsers. This wasn’t backward compatibility so much as a hack. It didn’t allow for any of the benefits of XHTML, though it succeeded in making pages written with XHTML syntax render in old browsers. So, the mime-type is the modern equivalent of DOCTYPE switching triggering quirks mode (which the Web Hypertext Application Technology Working Group embraces).

As far as hacks go, it worked. Like all transitional solutions, it was supposed to be dropped as browser support for XHTML grew. The problem is that Internet Explorer never supported it (even in version 7), and people opted to continue sending XHTML as HTML. Further, it seems many people don’t realize that real XHTML requires the correct application/xhtml+xml mime-type. This is something that must be set up on the server, as text/html is the default mime-type for .html documents on every web server I know of.

When XHTML is sent as text/html, it behaves differently than if it is sent correctly. XHTML sent as HTML is treated as tag soup, which means any optimized, lightweight XML parsers aren’t used. Tag-soup XHTML doesn’t require strict use of CDATA sections, and improper syntax doesn’t stop rendering of the page. The self-closing (null end) tags are treated as broken attributes. While it still works, XHTML is ideologically broken if it is sent as text/html. People who suggest we ought to use XHTML to guarantee that fledgling designers pick up strict syntax are suggesting that we use a broken implementation to uphold an ideal that is at odds with broken implementations (ignoring that an unforgiving markup language would cause most designers to give up out of irritation). It is hypocritical.

So, I see two choices that are ideologically sound. The first is to use XHTML and send it as application/xhtml+xml, Internet Explorer be damned. However, this is really not a good choice for most. Some would suggest content negotiation, but this is still a hack that requires a lot of extra thinking and planning (albeit a better one than sending XHTML as HTML). The second is to only use XHTML in specific instances where compatibility can be guaranteed.
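For the curious, content negotiation boils down to something like the sketch below: a bare-bones illustration of the idea, not production code (a real version would also have to honor q values and other Accept header quirks).

<?php
// Send real XHTML only to browsers that claim to accept it, and fall back
// to text/html for everything else (i.e. Internet Explorer).
$accept = isset($_SERVER['HTTP_ACCEPT']) ? $_SERVER['HTTP_ACCEPT'] : '';
if (strpos($accept, 'application/xhtml+xml') !== false) {
    header('Content-Type: application/xhtml+xml; charset=utf-8');
} else {
    header('Content-Type: text/html; charset=utf-8');
}
?>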

Let me elaborate on the second choice. HTML 4.01 is a web standard. Anyone in the web standards crowd who always advocates XHTML over HTML on the grounds that XHTML is inherently better suited for use doesn’t understand what tools are for. HTML with a strict DOCTYPE can be validated, written semantically, and obsessed over as much as XHTML. HTML just allows the web designer to make the choice (and good designers will obsess over their markup no matter what).

HTML and XHTML are tools to solve a problem. Just like a screwdriver won’t help when a hammer is needed, XHTML is no good when HTML is needed. I’ll be specific. On pages where free-form user input is allowed, the potential for non-designers (and designers, too, for that matter) to enter bad markup is huge. In a real XHTML page, the user could easily break a page, preventing rendering. When using HTML, the page may no longer validate, but it will still render. In instances such as these HTML is a better tool than XHTML.

Web applications, however, are a different story. System requirements can be specified as they are for traditional applications (e.g. a browser that supports XHTML can be required). This means that no hacks need to be used. Since web applications generally use form elements to display and edit data, concerns over user input are drastically reduced. Most of the time, discrete data is required rather than the free-form data one might find on, say, a comments page. So, data can be more accurately validated. When that data is inserted into an XHTML page, it’s far less likely to break the page. On a syntactical level, XHTML meshes well with programming languages; that is, the code must live up to certain standards. XHTML would help tie the front end to the back end. XHTML is a perfect tool for web applications.

So, ignoring all the common arguments about the dangers of using XHTML, the ideology of XHTML is broken if the page works in Internet Explorer 7 or below. Advocating the use of a broken technology is hypocritical when well written HTML 4.01 with a strict DOCTYPE is better suited for normal web usage. However, XHTML has a defined and useful place on the World Wide Web.

If you want another opinion, Maciej Stachowiak of Apple’s WebKit / Safari project weighs in with pretty much the same opinions I have.

Update: Lachlan Hunt pointed out that I screwed up the XHTML MIME type. I fixed the error.

Wikka Wiki Bread Crumbs

Friday, November 10th, 2006

Dan decided he wanted to set up a wiki to build content. I wasn’t involved in the process, but I got the job of making a bread crumbs script for the site.

Dan decided to use Wikka Wiki. When I asked why he didn’t use MediaWiki (the software that Wikipedia uses, which seems to be the de facto standard), he said it required PHP 5, which our server doesn’t have. I guess that was a good enough reason.

For all my niggles with Wikka Wiki, it is actually pretty standards compliant and has a cute name. The main issue with Wikka Wiki, which might be an issue with all wikis (I wouldn’t know since this is the first one I’ve ever coded on), is how it stores links from one page to another. It has a links table with a from_tag and to_tag. This is logical and works quite well until you try to do bread crumbs.

When you send a query to get every link that should go on a page, recursive links aren’t a problem. When you are trying to make a recursive function to get a bread crumb trail, it is quite annoying. Since Dan was in a hurry, he just did 10 left joins on the same table with a limit of one. That got the job done but didn’t always show a logical or short-route path to the current page. Dan decided that I was better at recursive functions and told me to work on it when I had time (except he calls them cookie crumbs instead of bread crumbs, which you’ll see in the code). After about 10 tries, I came up with the following:

// Recursively walks the links table from the top-level menu pages toward
// $page, building a ">" delimited trail (giving up after 10 levels).
function BuildCookieCrumbs2($page, $depth=0, $menu=array("CareerOpportunities","DefaultMenu","CurrentFrontPage")){
	if($depth > 10)
		return;
	if(in_array($page,$menu)){
		return $page;
	}
	// Grab every page linked to from the current level of the trail.
	$sql = "SELECT to_tag, from_tag FROM career_opportunities_links WHERE from_tag IN ('".implode("','",$menu)."')";
	$r = $this->Query("SET NAMES UTF8");
	$r = $this->Query($sql);
	$o = array();
	$f = array();
	while($row = mysql_fetch_assoc($r)){
		$o[] = $row["to_tag"];
		$f[] = $row["from_tag"];
	}
	if($ret = array_search($page,$o)){
		// Direct hit: the page is linked to from this level.
		return $f[$ret].">".$o[$ret];
	}
	else{
		// No direct hit: recurse a level deeper, then work out which of the
		// current level's pages leads into the trail that came back.
		$ret = "";
		$bcc = $this->BuildCookieCrumbs2($page,$depth+1,$o);
		$ret .= $bcc;
		$tmp = explode(">",$ret);
		$sql = "SELECT FIND_IN_SET(from_tag,'".implode(",",$menu)."') as fis FROM career_opportunities_links WHERE from_tag IN ('".implode("','",$menu)."') AND to_tag = '".$tmp[0]."'";
		$r = $this->Query("SET NAMES UTF8");
		$r = $this->Query($sql);
		$row = mysql_fetch_assoc($r);
		$ret = $menu[$row["fis"]-1].">".$ret;
		if(in_array($page,$tmp)){
			return $ret;
		}
	}
	return $ret;
}
// Public wrapper: returns the trail as an array of tags, minus the top-level menu page.
function BuildCookieCrumbs($page) {
	$ret = $this->BuildCookieCrumbs2($page);
	$ret = explode(">",$ret);
	unset($ret[0]);
	return array_values($ret);
}

These two functions can be dropped into the Wakka.class.php file. I have no idea how to call them and put the result into the template, as that part was already done when I started working on it. Passing in the current page tag returns an array of tags to use in the bread crumb trail. You’ll probably want to change the default menu array in BuildCookieCrumbs2, as it is set up to reflect our site.
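If you do have to wire it up yourself, something like the following might work from inside another Wakka method. The GetPageTag() call and the markup are guesses on my part, since, again, that part was already done before I got involved.

// Hypothetical usage sketch: GetPageTag() and the wrapper markup are
// assumptions, not verified against the Wikka template code.
$crumbs = $this->BuildCookieCrumbs($this->GetPageTag());
if (count($crumbs) > 0) {
	echo '<div class="breadcrumbs">' . implode(' &gt; ', $crumbs) . '</div>';
}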

This may not be the best solution, but it works pretty well.

Browser Tools for Web Designers

Tuesday, November 7th, 2006

I told Kathryn that I had a few browser tools for Macs she should check out for web design. I decided to make a post about it instead.

Make Safari Better

I really prefer to work in Safari on a Mac. Firefox, frankly, is slow as hell. The problem is that Safari, like most of Apple’s software, is simplified for general use. Out of the box, it lacks a lot of features that even Internet Explorer has. The qualifier there is out of the box.

Debugging JavaScript

While it doesn’t feel as robust as Firefox’s JavaScript debugger, Safari ships with one hidden away. One of the first things I do with a fresh install of Safari is turn on the Debug menu. To turn it on, open Terminal, enter defaults write com.apple.Safari IncludeDebugMenu 1, and press return. The next time you launch Safari, the Debug menu will appear. Make sure Log JavaScript Exceptions is checked, then open the JavaScript Console from the Debug menu (or press command + shift + j).

The interface is very minimal. At first blush, it seems to be lacking compared to Firefox’s console. After you use it some, you’ll find that it is as good as Firefox when it comes to picking up, identifying, and finding errors in your code.

Safari Tidy

The other tool I find quite helpful is Safari Tidy. Safari Tidy is a plugin for Safari that runs every page you browse through HTML Tidy. HTML Tidy helps to find any errors in your HTML. While it isn’t a validator, it does a pretty good job of identifying problems.

Safari Tidy puts a simple message in the bottom right side of the status bar in Safari. It may say 0 errors / 2 warnings, which tells the developer that there are two potential problems in the code. Double clicking the message brings up a modified view-source window that displays the errors. Double clicking an error in that window highlights the offending line. Identifying errors is much quicker this way.

Make Firefox Better

Firefox is already a great browser for developers. The JavaScript debugger is top notch and it has a built-in DOM inspector. However, I’ve found one extension that I can’t live without: The Web Developer Toolbar.

The Web Developer Toolbar

This toolbar adds a lot of functionality by allowing web designers to see tons of information about what is in the HTML. It has too many features to list them all, but I’ll list the ones I use the most (which aren’t necessarily the most helpful).

  1. JavaScript Error Notification

    In the far left of the toolbar, there is a small info icon. When a JavaScript error is encountered, it turns into a red icon. Clicking it will show the JavaScript Error Console. This at-a-glance error notification is the feature I use the most.

  2. Validate Local HTML

    Under the Tools menu, there is an option to run the current page through the World Wide Web Consortium’s validator as a local file. The source is submitted as a file, which allows any page to be validated. To boot, there is a key combo assigned to do this quickly in a new tab. There are also options to validate local stylesheets, as well as the regular validation options.

  3. Resize Window

    This sounds pretty silly, but there is no way to accurately resize a browser window without JavaScript or a toolbar. This one includes resizing to 800×600 as well as custom values.

There are tons of other features packed into this toolbar. The features I use most don’t accurately represent how powerful it is.

FireBug

FireBug is a great extension that is very useful in certain situations. While I don’t use it every day, FireBug is very powerful. It exposes the HTML of the page as it exists at any given moment, which means any changes to the DOM are accounted for in the source view.

What makes FireBug great is that you can then live edit the HTML and CSS. The browser updates as you type. FireBug won’t let you save your work, but it does allow for rapid debugging without impacting the actual files. Basically, you can experiment with various ideas to debug your layout without having to deal with backups or limited undos. Once you figure out your issue, you can copy and paste the updates to live files.

Got Any Suggestions?

Do you have any browser add-ons for web designers or developers that you can’t live without? Let me know!

W3C Listens, Incremental Update to HTML On The Way

Saturday, October 28th, 2006

Surprisingly, Slashdot scooped all the web design websites I normally read on Tim Berners-Lee’s announcement that HTML will be incrementally updated (as will things such as the W3C‘s HTML validator).

In the post, Berners-Lee addresses folks like (in order of impact) Bjoern Hoehrmann, Jeffrey Zeldman, Eric Meyer, and Molly Holzschlag, who have been angry with the W3C for resting on its laurels instead of listening to the community (apparently blatantly ignoring it). As everyone points out, the W3C should be advancing the web. Instead, small, independent groups are bringing us things like microformats and recommendations to update HTML.

So, the really important part of Berners-Lee’s post is not that HTML will be updated. It is that the W3C is claiming that it will now listen to the community by actually following the feedback mechanisms they have in place instead of blatantly ignoring them. The W3C is going to get back into the community instead of towering above it. Assuming they follow through, this could be a new age of progression.

I might have more to say after the weekend when the web design celebrities weigh in on the subject.

Ajax vs Specific Accessibility vs General Accessibility

Monday, October 23rd, 2006

I was reading Rob Cherny’s article Accessible Ajax, A Basic Hijax Example and started thinking a little more about accessibility. Cherny claims that this hijax method, using unobtrusive JavaScript to make a form submit with Ajax instead of a traditional POST when Ajax is available, is more accessible. While I think it is more accessible than only using Ajax, it is only more accessible for generic alternative browsers; it isn’t any more accessible for disabled people.

Apparently the term hijax, as used in Cherny’s article, was coined by a guy named Jeremy Keith. I don’t think the term was really needed, as the concept of unobtrusive JavaScript pretty much sums up the idea. I’ve inadvertently been using hijax since May, however, so I have some opinions on it.

Web accessibility is a big beast with two opposing heads. The original term accessible meant that the site was designed such that people with disabilities could use the site. Recently, the term has changed to include people with or without disabilities that browse from alternative devices. By alternative I mean mobile phones, ultra-mobile PCs, screen readers, and the like. These alternatives often lack the typical mouse-based interface of desktops, JavaScript support, cascading stylesheet support, or even HTML support. So, one head of the beast is accessible-for-disabled-persons and the other is accessible-for-alternate-devices.

I personally prefer the inclusiveness of the second head when I talk about accessibility. However, the caveat is that saying something is accessible to some facet of alternative browsers may not make it accessible to the disabled. If you aren’t specific about what you are trying to be accessible to, you end up confusing people (see my comment).

Cherny’s article elegantly addresses a facet of the second head. The form degrades gracefully. Unobtrusive JavaScript is designed this way: create a normal web page, then spice it up with JavaScript to improve the existing functionality. So, for Ajax forms, it is a three-step process that might go as follows:

  1. Create a form that works by POST.
  2. If JavaScript is available, unobtrusively add support for validation.
  3. If Ajax is available, unobtrusively add support for Ajax.

If there is no JavaScript, the form will post normally. If there is no Ajax, the user gets error correction and the page posts normally. If there is Ajax, the user gets error correction and doesn’t have to wait for a page load. Whatever the case, the form is submitted and no alternative browser is left out.
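The server side has to cooperate, too: the same script needs to answer a normal POST with a full page (or a redirect) and an Ajax POST with just a fragment. Here is a rough sketch of how that might look; the field name and the X-Requested-With check are assumptions about your form and your JavaScript library, not anything from Cherny’s article.

<?php
// Sketch of a dual-mode form handler. Assumes the JavaScript layer marks
// its requests with the X-Requested-With header, as most libraries do.
$errors = array();
if ($_SERVER['REQUEST_METHOD'] == 'POST') {
    if (!isset($_POST['comment']) || trim($_POST['comment']) == '') { // hypothetical field
        $errors[] = 'Please enter a comment.';
    }
    $isAjax = isset($_SERVER['HTTP_X_REQUESTED_WITH'])
        && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest';
    if ($isAjax) {
        // Ajax submission: return only the message for the script to insert.
        echo count($errors) ? implode(' ', $errors) : 'Thanks for your comment.';
        exit;
    }
    if (count($errors) == 0) {
        // Normal submission: do a full page round trip.
        header('Location: thanks.html');
        exit;
    }
}
// Otherwise, render the full page with the form (and any $errors) as usual.
?>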

However, there are still problems with Ajax and the first head of accessibility. I’m going to focus on screenreaders, as people with motor disabilities or deafness still have random access to the page (they aren’t limited to linearly reading the page). I will say this, though, in reference to the deaf and those with motor disabilities: the form itself must be accessible, using labels and accesskeys, or it’s still not fully accessible. Screenreaders, at this point, are still no good with dynamic page updates, which are a mainstay of Ajax. Some aren’t even good with alerts. Of all the available solutions to the problem of updating content dynamically, none of them work across the board. The only way to accessibly do dynamic content updates is to give the user another option.

Since I was aware of this problem when I redesigned my site, I built in a link to turn off Ajax above every form that uses Ajax. The Ajax is unobtrusive, or hijax if you prefer. This was the only method I could think of to allow full access to my forms. Until screenreaders catch up to the technology, the best Ajax accessibility may be no Ajax at all. So, let the user opt out if he wants.
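One way to wire up that kind of opt-out (not necessarily the way my site does it) is to keep a flag in the session and only include the Ajax script when the flag isn’t set. The ?ajax parameter and the script filename below are made up for the sketch.

<?php
// Sketch of an Ajax opt-out: a ?ajax=off link flips a session flag, and the
// page only emits the script tag when Ajax is still on.
session_start();
if (isset($_GET['ajax'])) {
    $_SESSION['no_ajax'] = ($_GET['ajax'] == 'off');
}
$ajaxOn = empty($_SESSION['no_ajax']);
?>
<p><a href="?ajax=<?php echo $ajaxOn ? 'off' : 'on'; ?>">
    Turn <?php echo $ajaxOn ? 'off' : 'on'; ?> Ajax for this form</a></p>
<?php if ($ajaxOn): ?>
<script type="text/javascript" src="hijax-form.js"></script>
<?php endif; ?>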

Web Standards Still Matter

Wednesday, September 20th, 2006

I want to do a podcast. If you are on my site and you look at the navigation bar, you’ll see a heading for it. I just haven’t had the time yet, and I’ve been questioning my original intent. Now, I don’t question it as much.

When I start thinking about how I need a headset to record the podcasts, I get all antsy. I want to make it, but I still need to come up with several weeks’ worth of topics. I know it’s going to be a web design podcast for hobbyists, or something like that. I couldn’t decide, however, if I should try to keep up with bleeding-edge concepts, like microformats, or continue to advocate the old favorites: web standards, accessibility, unobtrusive JavaScript, and the rest.

As I was walking to work Monday, I decided that there was no reason not to make a podcast that dealt with the old favorites. Web standards (and the rest) are still not widely used. I shouldn’t be dissuaded by the fact that, among the web design intelligentsia, web standards are becoming less of a topic of discussion and more a way of life. The people I hear talking about this sort of thing are already living it and have moved on to the next big thing. Someone (other than the Web Standards Project, I guess) needs to keep talking about it for the newbies and late adopters.

Again, today, I was pondering this on the way to work. After sitting down at my desk and opening one of my news feeds, I saw two articles that pressed the point that people need to keep talking about web standards. Why Standards Still Matter, written by Roger Johansson (and plugged on his website), appeared on Vitamin and offered some helpful hints on how to keep the discussion flowing.

Robert Nyman also threw in a few words that, once again, summed up the need to use and advocate standards.

These two articles have reinforced my thoughts on where I want to head with my podcast and are worth a read if you care anything about web design.