Archive for the ‘Web Design’ Category

Ajax vs Specific Accessibility vs General Accessibility

Monday, October 23rd, 2006

I was reading Rob Cherny’s article Accessible Ajax, A Basic Hijax Example and started thinking a little more about accessibility. Cherny claims that this hijax method, using unobtrusive JavaScript to make a form submit with Ajax instead of traditional POSTing when Ajax is available, is more accessible. While I think it is more accessible than only using Ajax, it is only more accessible for generic alternative browsers; it isn’t any more accessible for disabled people.

Apparently the term hijax used in Cherny’s article was coined by Jeremy Keith. I don’t think the term was really needed, as the concept of unobtrusive JavaScript pretty much sums up the idea of hijax. I’ve inadvertently been using hijax since May, however. So, I have some opinions on it.

Web accessibility is a big beast with two opposing heads. The original term accessible meant that the site was designed such that people with disabilities could use the site. Recently, the term has changed to include people with or without disabilities that browse from alternative devices. By alternative I mean mobile phones, ultra-mobile PCs, screen readers, and the like. These alternatives often lack the typical mouse-based interface of desktops, JavaScript support, cascading stylesheet support, or even HTML support. So, one head of the beast is accessible-for-disabled-persons and the other is accessible-for-alternate-devices.

I personally prefer the inclusiveness of the second head when I talk about accessibility. However, the caveat is that saying something is accessible to some facet of alternative browsers may not make it accessible to the disabled. If you aren’t specific about what you are trying to be accessible to, you end up confusing people (see my comment).

Cherny’s article elegantly addresses a facet of the second head. The form degrades gracefully. Unobtrusive JavaScript is designed this way: create a normal web page, then spice it up with JavaScript to improve the existing functionality. So, for Ajax forms, it is a three step process that might go as follows:

  1. Create a form that works by POST.
  2. If JavaScript is available, unobtrusively add support for validation.
  3. If Ajax is available, unobtrusively add support for Ajax.

If there is no JavaScript, the form will post normally. If there is no Ajax, the user gets error correction and the page posts normally. If there is Ajax, the user gets error correction and doesn’t have to wait for a page load. Whatever the case, the form is submitted and no alternative browser is left out.
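As a sketch of steps 2 and 3, the enhancement can be gated on a feature test. The function names and the idea of passing the window in as a parameter are mine, for illustration; they aren’t from Cherny’s article:

```javascript
// Returns true when the window-like object can make background requests.
// Taking the window as a parameter (instead of using the global) keeps
// this testable outside a browser.
function supportsAjax(win) {
	return !!(win.XMLHttpRequest || win.ActiveXObject);
}

// Hijack a form only when Ajax is available; otherwise the handler is
// never attached and the form POSTs the traditional way.
function hijaxForm(win, form, submitViaAjax) {
	if (!supportsAjax(win)) {
		return false;
	}
	form.onsubmit = function () {
		submitViaAjax(form);
		return false; // cancel the normal POST
	};
	return true;
}
```

If the feature test fails, nothing is attached and no alternative browser is any worse off than before the script loaded.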

However, there are still problems with Ajax and the first head of accessibility. I’m going to focus on screenreaders, as people with motor disabilities or deafness still have random access to the page (they aren’t limited to linearly reading the page). I will say this, though, in reference to the deaf and those with motor disabilities: the form itself must be accessible, using labels and accesskeys, or it’s still not fully accessible. Screenreaders, at this point, are still no good with dynamic page updates, which are a mainstay of Ajax. Some aren’t even good with alerts. Of all the available solutions to the problem of updating content dynamically, none of them work across the board. The only way to accessibly do dynamic content updates is to give the user another option.

Since I was aware of this problem when I redesigned my site, I built in a link to turn off Ajax above every form that uses Ajax. The Ajax is unobtrusive, or hijax if you prefer. This was the only method I could think of to allow full access to my forms. Until screenreaders catch up to the technology, the best Ajax accessibility may be no Ajax at all. So, let the user opt out if he wants.
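One way to sketch such an opt-out is to check a stored preference before enhancing anything. The ajax=off cookie name here is a hypothetical of mine, not necessarily how my site stores the choice:

```javascript
// Returns false when the visitor has opted out of Ajax.
// The "ajax=off" cookie name is hypothetical.
function ajaxAllowed(cookieString) {
	return !/(^|;\s*)ajax=off(;|$)/.test(cookieString || "");
}
```

The enhancement code would then bail out early, e.g. `if (!ajaxAllowed(document.cookie)) return;`, leaving the plain form in place.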

Internet Explorer 7

Thursday, October 19th, 2006

Internet Explorer 7 has been released. Download it now! Help usher in a new era: an era where Microsoft is actually interested in web standards.

Update: There is some discussion going on over at 456 Berea Street. Of particular interest is how to install multiple standalone versions of Internet Explorer. While this info can be gathered elsewhere, the feedback from the comments is useful in deciding which method to use. Plus, these are opinions from web designers rather than tech pundits. So, that is nice.

Update: Apparently someone already found a vulnerability. Some things never change.

Web Standards Still Matter

Wednesday, September 20th, 2006

I want to do a podcast. If you are on my site and you look at the navigation bar, you’ll see a heading for it. I just haven’t had the time yet, and I’ve been questioning my original intent. Now, I don’t question it as much.

When I start thinking about how I need a headset to record the podcasts, I get all antsy. I want to make it, but I still need to come up with several weeks’ worth of topics. I know it’s going to be a web design podcast for hobbyists, or something like that. I couldn’t decide, however, if I should try to keep up with the bleeding edge concepts, like microformats, or continue to advocate the old favorites: web standards, accessibility, unobtrusive JavaScript, and the rest.

As I was walking to work Monday, I decided that there was no reason not to make a podcast that dealt with the old favorites. Web standards (and the rest) are still not widely used. I shouldn’t be dissuaded by the fact that, among the web design intelligentsia, web standards is becoming less a topic of discussion and more a way of life. The people I hear talking about this sort of thing are already living it and have moved on to the next big thing. Someone (other than the Web Standards Project, I guess) needs to keep talking about it for the newbies and late adopters.

Again, today, I was pondering this on the way to work. After sitting down at my desk and opening one of my news feeds, I saw two articles that pressed the fact that people need to keep talking about web standards. Why Standards Still Matter, written by Roger Johansson for Vitamin (and plugged on his website), offered some helpful hints on how to keep the discussion flowing.

Robert Nyman also threw in a few words that, once again, summed up the need to use and advocate standards.

These two articles have reinforced my thoughts on where I want to head with my podcast and are worth a read if you care anything about web design.

Internet Explorer 7 and Automatic Update

Thursday, July 27th, 2006

So, maybe I missed the scoop. According to IEBlog, it turns out Internet Explorer 7, commonly referred to as IE7, will be released as a high-priority update via Automatic Updates. Or will it?


It had long been rumored that IE7 would be released via Automatic Updates. However, in an interview with Chris Wilson posted on Think Vitamin on the 24th of July, 2006, the question “Is IE gonna update to IE7?” was answered as follows:

Is IE gonna update to IE7? So, I think the first thing, really, is that we can’t really force it on users… I mean, that is not our goal. We really like to offer users choice. It is a different user interface. Some people will be really jarred by that. I think that we certainly want to encourage everyone out there to… I do believe that we will offer it through Windows Update but it won’t be an automatic silent update. Certainly, it won’t be like one day you come in and suddenly your computer is running IE7 rather than IE6. Certainly, we have to ask the user if they really want it. As nice as it would be to blast it on to everybody’s system, I don’t think that can happen.

Chris Wilson is group program manager for Internet Explorer Platform and Security at Microsoft. He’s someone that I would suspect to be in the know. According to IEBlog, something changed in two days. IEBlog says the following:

To help our customers become more secure and up-to-date, we will distribute IE7 as a high-priority update via Automatic Updates (AU) shortly after the final version is released for Windows XP, planned for the fourth quarter of this year.

This has caused some confusion. The IEBlog post seems to suggest it was a recent change to their distribution plan. I’d hazard a guess that the change was in the making before the Chris Wilson interview but only trickled out from the official blog on July 26th.

The announcement seems to suggest a delivery method that is a hybrid of the install-if-you-want-it method Chris Wilson talks about and the silent update he shuns. The update will download via Automatic Updates. The install won’t be silent. The user will be prompted whether the IE7 install should be ignored completely, put off until later, or installed. Also, the post claims that IE7 will be able to roll back to IE6.

So, it looks like the Internet Explorer Team has managed to snuggle up in the middle.

Some Thoughts

I appreciate their user-choice attitude, but I think now is a bad time to apply it. Internet Explorer is the largest source of installed spyware and adware. The fact that the interface might be a shock to users accustomed to IE6 is really less important than protecting them from themselves. But, I really, really, really want IE7 to be adopted as fast as possible so I don’t have to worry so much about IE6 CSS hacks. I mean, I’m biased.

The other odd quirk is that the post claims upgrading to IE7 will allow users to “preserve your current toolbars” shortly after it lauds the browser’s advanced security features, “including ActiveX Opt-in, the Phishing Filter and Fix My Settings.” So, what IE7 will be doing is allowing users to keep all their spyware and adware infested browser cruft despite all the security enhancements the Internet Explorer Team created to prevent people from installing malware. Good thinking, guys. I understand leaving favorites, home page settings, and other settings, but the typically malware-sporting toolbars should be left out. If the user really cares about a toolbar, he’ll go find it again. Blame it on incompatibility. Firefox does it with extensions all the time.

Finally, I’m worried about the roll back to IE6 thing. I’ve heard stories about how big of a headache it is to uninstall the IE7 betas. I would think an easy uninstall for a beta product would be a high priority if a company wanted people to test it. Apparently, this is not the case for Microsoft. I hope they improved the uninstall substantially for the final release.

Examining John C. Dvorak’s Anti-CSS Article

Tuesday, July 18th, 2006

We all know it. John C. Dvorak is a troll. He’s not just a troll. He’s an uninformed troll. In Why CSS Bugs Me, an article posted on June 12th, 2006, at the PC Magazine website, John C. Dvorak explains why CSS is useless.

The base assumption of his article is that he actually knows what he is talking about. John C. Dvorak is not a designer. He’s a tech columnist. His blog site, which he claims to be redesigning, is proof enough of that. It’s pretty clear that his inability to style a website should mean his opinion is worthless. Nevertheless, I found the article linked from CSS Insider. That means someone cared about his opinion.

I know I’m in for a treat when the opening line of an article ends with a question mark and an exclamation point. Dvorak claims that none of this stuff works. That statement is absolutely wrong. A great deal of it works. In fact, almost everything works as expected. If I do a font-family:verdana, helvetica, sans-serif on a div tag, every important graphical browser that has been released since I started using the web will render the text in one of the listed fonts. It’s only when using advanced CSS that problems arise.

What John seems to be upset about is the PC platform. Frankly, every major browser for the Mac is pretty standards compliant. Opera was a little bit of a problem before Opera 9 was released, but even Opera 8 rendered most things the same way Firefox and Safari did. The problem lies on the PC, where one of the major browsers, which happens to be the most popular, is not standards compliant. That browser, in case you’ve been living in a hole, is Internet Explorer. Microsoft even prides itself on how well it embraced and extended standards (for example, filters). The major problem for most designers is that the Internet Explorer rendering engine does the box model wrong. But, really, with conditional comments, making a bit of CSS for Internet Explorer is a trivial task.

The point is that he is picking on CSS instead of picking on the people who write the CSS rendering engines. He should be bad-mouthing Microsoft, not CSS.

I take that back. For a moment, he did take a strike at the defining principle of CSS: it cascades.

The first problem is the idea of cascading. It means what it says: falling – as in falling apart. You set a parameter for a style element, and that setting falls to the next element unless you provide it with a different element definition. This sounds like a great idea until you try to deconstruct the sheet. You need a road map. One element cascades from here, another from there. One wrong change and all hell breaks loose. If your Internet connection happens to lose a bit of CSS data, you get a mess on your screen.

I’ll ignore the part about the connection losing a bit of CSS data because I think it’s an asinine suggestion. I can only think of one situation where this would happen, and it wouldn’t totally mess up the design if the CSS were written properly. What I want to address is the fact that what Dvorak is bitching about is how CSS was designed to work. The beauty of it is that I can declare a font-family on the body and it should cascade down into paragraph tags. I don’t have to explicitly set the value every time. Further, plain-old HTML had some amount of inheritance built in (for example, wrapping a bold tag in a font tag will cause whatever is in the bold tag to inherit the font information). The cascading nature isn’t broken; rather, Dvorak’s understanding of CSS or his ability to use it is underdeveloped. If he’s having trouble figuring out where a style comes from, it is probably the fault of the author or of his ability to read CSS, not the fault of CSS. Well-written CSS, like C or PHP, is quite easy to read if the reader knows what he is reading and the writer wrote it well.
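To make the point concrete, here is the sort of inheritance being described (a minimal sketch of my own, not from Dvorak’s article):

```css
/* Declared once on the body... */
body { font-family: verdana, helvetica, sans-serif; }

/* ...and every paragraph, list, and div inherits it with no extra rule.
   Only the exceptions need their own declaration. */
blockquote { font-family: georgia, serif; }
```

That is the cascade working as intended: one declaration, site-wide effect, overridden only where you say so.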

John closes his article by throwing punches at the W3C.

And what’s being done about it? Nothing! Another fine mess from the standards bodies.

First, it’s not the W3C’s fault that the browsers don’t implement the standard properly. That’s like blaming lawmakers when criminals commit crimes. If there were only Mozilla and Internet Explorer, I might concede that the standard is unclear. However, Opera, Safari, and Mozilla typically agree on how a page should look. So, I can’t blame it on the specification.

Second, something is being done about it. Initiatives like the Web Standards Project have been busting ass for years to get browser makers to stick to standards. The web standards movement is gaining major footholds. The only major graphical browser that isn’t standards compliant is Internet Explorer. The IE7 team is working hard to make Internet Explorer more standards compliant. Pretty soon, the differences between how browsers render pages will be almost nil. The only difference will be how soon the browser makers integrate new CSS features, like those in the CSS 3 draft that are already being built into browser releases.

It seems John got his panties in a wad over this one. For once, I think this was a real rant rather than a calculated troll to boost the number of page views on his articles. Unfortunately, he comes across as a whining web design newbie. Those of us that have been doing standards-based web design for a while have dealt with far worse than he has, and we still appreciate CSS for the excellent tool it is.

If you want to read the article, it can be found on PC Magazine.

How To Make an Ajax Chat Room

Friday, June 23rd, 2006

I’ve been helping this guy from India with an Ajax / XML chat app. I wrote one ages ago, but I think it broke. So, since I’m already giving a how-to, I might as well write it down.

Ajax For Newbies

If you are familiar with XML, the buzzword Ajax, the XMLHttpRequest() object, and DOM scripting, skip to the juicy part. If not, keep on reading.

An XML Primer for Ajax

XML looks kind of like HTML, but it is quite different. For the sake of brevity: XML is a structured data format that draws from SGML. The difference is that it must be well formed and that the tags can be arbitrarily named (unless you specify a DTD).

Going into more detail would not be pragmatic since the JavaScript XML parsers only care about whether or not the XML is well-formed. So, here is an example XML file:

   <messages>
     <message>
       <from>Fred</from>
       <body>Fred here... Just making a post!</body>
     </message>
     <message>
       <from>Joey</from>
       <body>Hey, Fred.  Thanks for the post.</body>
     </message>
   </messages>

The good thing about good XML is that it has metadata built in. You can read it and know what is what. The code above is obviously a group of messages. Inside the messages group is a message. In this case, there are two of them. Inside each message is a from and a body. The from, of course, contains the name of the person who sent the message. It follows that the body contains the message body. This XML file will be used later.

XMLHttpRequest Primer for Ajax

XMLHttpRequest, at least on browsers that aren’t Internet Explorer 6 or less, is a native object in JavaScript that allows the client to send a request to the server in the background. That is, the browser doesn’t have to physically go to the page. This makes it easy to update parts of pages without iframe, frame and other hacks.

I’ll give you the condensed version of how to use it for the sake of brevity. I’m sort of copy / pasting code from Apple’s Developer documents because their example does XMLHttpRequest() the right way.

// This needs to be in the global scope so we can use it in other functions.
var xml_obj;

// We will be sending loadXMLDoc the url of the page we want to call,
// the query string to post to the url, and a function to call when
// the request updates.
function loadXMLDoc(url, query_string, callback) {
	xml_obj = false;
	// Try native XMLHttpRequest first.
	if (window.XMLHttpRequest) {
		try {
			xml_obj = new XMLHttpRequest();
		} catch(e) {
			xml_obj = false;
		}
	// If native fails, fall back to IE's ActiveX objects.
	} else if (window.ActiveXObject) {
		try {
			xml_obj = new ActiveXObject("Msxml2.XMLHTTP");
		} catch(e) {
			try {
				xml_obj = new ActiveXObject("Microsoft.XMLHTTP");
			} catch(e) {
				xml_obj = false;
			}
		}
	}
	if (xml_obj) {
		// Any time the state of the request changes, callback will be called.
		xml_obj.onreadystatechange = callback;
		// Set the parameters of the request.
		// POST can also be GET.  We use the URL from above.
		// The true tells whether the request should be asynchronous.
		xml_obj.open("POST", url, true);
		// Since we are POSTing, we send the query string as a header
		// instead of as a string at the end of the URL.
		xml_obj.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
		xml_obj.send(query_string);
	} else {
		alert("Failed to create XMLHttpRequest!");
	}
}

// This is an example callback function.
function messagesRetrieved(){
	if (xml_obj.readyState == 4){
		// When readyState is 4, the request is complete.
		if (xml_obj.status == 200){
			// When status is 200, we know there are no server errors
			// and we can process the data.
		} else {
			alert("There was a problem retrieving the XML data:\n" + xml_obj.statusText);
		}
	}
}

// How we would call loadXMLDoc:
loadXMLDoc("request.php", "action=get_xml", messagesRetrieved);

Hopefully, from the comments in the script, you can see what is going on. When the request starts, the xml_obj.readyState property will be updated. Once the request is complete, there will be several properties available.

  • xml_obj.responseXML is the most important for Ajax, since that is where the data is stored as an object. Yes, JavaScript automagically parses the XML and puts it in an object for you.
  • xml_obj.responseText is nice because it will print out the text returned from the server. It’s great for debugging.
  • xml_obj.status is the server status code of the request. This should be 200 or something is wrong with your scripts.

What is Ajax About, Then?

As I said, the term Ajax is a buzzword. It’s a short way of saying “JavaScript using XMLHttpRequest to get XML from a server in the background.”

There are other methods that probably get called Ajax, but aren’t. For example, XMLHttpRequest allows the server to send back plain text instead of XML (Ajat?). Also, if the asynchronous flag is set to false, the request doesn’t run in the background; instead, the running script waits for a response from the server before it continues.

It just turns out that XML is slightly more usable for complex stuff and that synchronous requests defeat the point. Ajat, though, has many uses.

DOM Scripting

DOM scripting is the key to Ajax. Ajax is pretty useless if you can’t extract the data. Basically put, the DOM is HTML / XML represented in an object. DOM Scripting, then, is using JavaScript (or some other language, I suppose) to manipulate this object. If you manipulate the DOM of your HTML page, it will reflect visibly in the browser.

DOM scripting replaces old-school document.write coding. Instead, you insert nodes (aka tags) or remove nodes from the DOM, and thus the page. The great thing is that XML files can be represented as a DOM object. That means that you can manipulate the XML the same way you manipulate the HTML. You’ll make a lot of use of the following (where xml = xml_obj.responseXML):

  • xml.childNodes is a collection of the children of the current node. It’s handy to loop on: for(i = 0; i < xml.childNodes.length; i++){}
  • xml.nodeName is the string value of the current tag.
  • xml.nodeType is a numeric representation of what kind of node the current element is, text (3) or element (1). This is handy for conditionals: if(xml.nodeType == 3){alert(xml.nodeValue);}
  • xml.nodeValue is the string value of the current element if it is a text node.

There are more useful properties and methods but, for now, we're going to walk the tree with loops and build the output. It isn't the most efficient method, but it is more agnostic than, say, xml.getElementsByTagName.
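To show the walking logic outside a browser, here is a recursive version of the same idea, run against a tiny hand-built stand-in for responseXML (plain objects carrying only the properties the loop uses; the tree and function are mine, for illustration):

```javascript
// A stand-in for xml_obj.responseXML built from the example file:
// nodeType 1 is an element, nodeType 3 is text.
var xml = {
	nodeType: 1, nodeName: "messages",
	childNodes: [{
		nodeType: 1, nodeName: "message",
		childNodes: [
			{ nodeType: 1, nodeName: "from",
			  childNodes: [{ nodeType: 3, nodeName: "#text", nodeValue: "Fred" }] },
			{ nodeType: 1, nodeName: "body",
			  childNodes: [{ nodeType: 3, nodeName: "#text", nodeValue: "Hi!" }] }
		]
	}]
};

// Collect every text node's value, depth-first, the way the chat's
// nested loops do for its fixed three levels.
function textValues(node) {
	var values = [];
	for (var i = 0; i < node.childNodes.length; i++) {
		var child = node.childNodes[i];
		if (child.nodeType == 3) {
			values.push(child.nodeValue);
		} else if (child.nodeType == 1) {
			values = values.concat(textValues(child));
		}
	}
	return values;
}
```

Calling textValues(xml) on that tree pulls out the two text nodes, "Fred" and "Hi!", in document order.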

A Basic Ajax Chat

As we see above, we already have a good deal of code written. In fact, most of the tedious stuff is done. We just have to solve a few problems.

Server Side Scripting

Ajax isn't worth much without server side scripting. PHP is my weapon of choice, but ASP, Perl, Ruby, or any other language will work just fine. However, since I chose PHP, it will affect my data storage solution.

Data Storage

Most people use Ajax to deliver data stored on the server. All the usual methods are available, including MySQL, text files, and XML. XML seems like the likely choice, since we are spitting out XML. However, at least with PHP, working with XML is hard. Further, doing queries on XML seems like it'd be severely difficult in PHP. Other languages might like XML better than PHP, though.

So, for the sake of this article, that leaves text files and MySQL. Normally, I'd go with MySQL, but this is a simple article. So, we're just going to use text files. Opening files is easier than writing all the code to get the database running.

The Text Storage File

The text file only needs to hold two variables: the name of the person sending the message, and their message. We'll use a tab-separated file, since it is unlikely that the user will enter a tab (because that typically changes fields), and we'll remove all carriage returns. So, the XML file mentioned at the beginning will be stored like this:

Fred	Fred here... Just making a post!
Joey	Hey, Fred.  Thanks for the post.
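Reading a line back out is just a split at the first tab. A quick sketch (in JavaScript for illustration; the real conversion is the PHP function in the next section, and this helper name is hypothetical):

```javascript
// Split one stored line into its two fields at the first tab.
// Only the first tab matters; anything after it belongs to the body.
function parseTsvLine(line) {
	var tab = line.indexOf("\t");
	return {
		from: line.slice(0, tab),
		body: line.slice(tab + 1)
	};
}
```

This is also why tabs and newlines have to be stripped from user input before writing: they are the only two characters with structural meaning in the file.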

Convert The Text File To XML

We'll need to write a function in PHP to convert the text file to XML for display in our chat application. It should be something like this:

function tsv_to_xml($file){
	$output = @file_get_contents($file);
	if($output === false)
		die("Couldn't open the file.");
	if(trim($output) != "")
		$output = explode("\n", trim($output));
	else
		$output = array();
	$return = "<messages>";
	for($i = 0; $i < count($output); $i++){
		$output[$i] = explode("\t", $output[$i], 2);
		$return .= "<message>";
		$return .= "<from>";
		$return .= htmlentities($output[$i][0]);
		$return .= "</from>";
		$return .= "<body>";
		$return .= htmlentities($output[$i][1]);
		$return .= "</body>";
		$return .= "</message>";
	}
	$return .= "</messages>";
	return $return;
}
That should return the XML file specified at the beginning from the tab-separated file.

Adding a New Message

We also need to add to the text file. So, we'll write a PHP function for that, as well:

function tsv_add($file, $from, $body){
	$output = @file_get_contents($file);
	if($output === false)
		die("Couldn't open the file.");
	$output = trim($output);
	// Flatten the characters that would break the format: tabs and line breaks.
	$from = preg_replace("/(\t|\r\n|\r|\n)/", " ", $from);
	$body = preg_replace("/(\t|\r\n|\r|\n)/", " ", $body);
	$tsv = $from."\t".$body;
	$my_file = fopen($file, "w");
	fwrite($my_file, trim($tsv)."\n".$output);
	fclose($my_file);
	return tsv_to_xml($file);
}
Basically, we create a tab-separated entry for our file (stripping out protected characters), dump it into a file and return the updated XML file.
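The character stripping amounts to “replace anything that could break the format with a space.” The same idea in JavaScript, if you wanted to pre-clean on the client as well (a hypothetical helper, not part of the chat code):

```javascript
// Flatten tabs and line breaks (the two things that would corrupt a
// tab-separated, newline-delimited file) into plain spaces.
function flattenField(s) {
	return s.replace(/[\t\r\n]/g, " ");
}
```

Cleaning on the server remains mandatory either way; a client-side pass only saves the round trip for honest users.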

Putting the PHP Together

It'd be a bummer to use several different PHP files when one would do. So, we'll use a script to manage it. We want to accept an action parameter to help our script decide what to do. It will look like this:

$file = "./data/chat.txt";

switch($_POST["action"]){
	case "get_xml":
		header("Content-type: text/xml");
		echo tsv_to_xml($file);
		break;
	case "add_post":
		header("Content-type: text/xml");
		echo tsv_add($file, $_POST["from"], $_POST["post"]);
		break;
	default:
		die("There is no action associated with that call.");
}

Once we have all that PHP in one file, it will be our request file. So, we will call it request.php and put it on our development server, putting the tsv_add and tsv_to_xml functions at the top. Make sure the data folder is writable! Make sure you put an empty chat.txt file in the data folder!


The only thing we don't have is the HTML file (and a little bit of JavaScript) we'll use to display the XML from the chat. The code for the HTML should look something like this (except you'll probably want to put the CSS in a file and use the link tag):

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
 <head>
  <title>Ajax Chat</title>
  <style type="text/css">
   label{display:block; width:100%;}
   input{display:block; width:100%;}
  </style>
  <script type="text/javascript" src="chat.js"></script>
 </head>
 <body onload="get_xml();">
  <form method="post" action="#" name="send_form" id="send_form" onsubmit="send_xml(); return false;">
    <label for="uname" accesskey="n">Name</label>
    <input type="text" id="uname" name="uname">
    <label for="mbody" accesskey="m">Message</label>
    <input type="text" id="mbody" name="mbody">
    <input type="submit" id="submit" value="Send Message" accesskey="s">
  </form>
  <div id="chat_area">Loading chat...</div>
 </body>
</html>

The Rest of the JavaScript

We need just a little bit more JavaScript: the get_xml() function, the send_xml() function, and the full version of the messagesRetrieved() function (see above).

function messagesRetrieved(){
	if (xml_obj.readyState == 4){
		if (xml_obj.status == 200 && xml_obj.responseXML){
			var xml = xml_obj.responseXML;
			var output = "";
			for(var i=0; i < xml.childNodes.length; i++){
				for(var j=0; j < xml.childNodes[i].childNodes.length; j++){
					if(xml.childNodes[i].childNodes[j].nodeType == 1){
						for(var k=0; k < xml.childNodes[i].childNodes[j].childNodes.length; k++){
							if(xml.childNodes[i].childNodes[j].childNodes[k].nodeName == "from"){
								output += "<b>" + xml.childNodes[i].childNodes[j].childNodes[k].firstChild.nodeValue + "</b>: ";
							} else if(xml.childNodes[i].childNodes[j].childNodes[k].nodeName == "body"){
								output += xml.childNodes[i].childNodes[j].childNodes[k].firstChild.nodeValue + "<br>";
							}
						}
					}
				}
			}
			document.getElementById("chat_area").innerHTML = output;
		} else {
			alert("There was a problem retrieving the XML data:\n" + xml_obj.statusText + "\n" + xml_obj.responseText);
		}
	}
}

function get_xml(){
	loadXMLDoc("request.php", "action=get_xml", messagesRetrieved);
}

function send_xml(){
	var u = document.send_form.uname.value;
	var b = document.send_form.mbody.value;
	var qs = "action=add_post&from=" + escape(u) + "&post=" + escape(b);
	loadXMLDoc("request.php", qs, messagesRetrieved);
}

Put all the JavaScript (except the initial skeleton of messagesRetrieved()) into a file called chat.js and save it to the same folder as the HTML file and request.php.

Finishing Up

Assuming everything is set up the way I set it up on my server, you should have a rudimentary Ajax chat. Of course, there are some tweaks I highly suggest. The first is adding clip and overflow to the chat_area div. If done right, you can have a scrollable window much like an iframe.
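For instance, a rule along these lines (the exact height is an arbitrary pick of mine) gives chat_area its own scrollbar:

```css
/* A fixed height plus overflow makes the div scroll like an iframe. */
#chat_area {
	height: 200px;
	overflow: auto;
}
```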

The second thing to do is modify the script so it keeps track of the time the posts were made. Then the script could figure out which posts the user hasn't received and send only them. This will cut down on bandwidth, which is important. I let my first Ajax Chat run all day and I ended up doing three gigs of bandwidth. That's a lot considering it was all text files.
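The client-side half of that idea might look like this (the time field, function name, and message shape are hypothetical; the storage format would also need a timestamp column):

```javascript
// Given all messages and the timestamp of the last one the client saw,
// return only the newer ones, so only those need to be sent or rendered.
function newMessagesSince(messages, lastSeen) {
	var fresh = [];
	for (var i = 0; i < messages.length; i++) {
		if (messages[i].time > lastSeen) {
			fresh.push(messages[i]);
		}
	}
	return fresh;
}
```

The server would do the equivalent filtering before building the XML, which is where the real bandwidth savings come from.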

The third thing I would do is learn more about DOM scripting and get rid of those for loops in messagesRetrieved(). While they are agnostic and easy to add message items to, they aren't pretty.


I wrote this code and tested it on Safari and Firefox on Mac. It worked for me with those browsers. It might not work for you and you may have to tweak it heavily. I don't have time to support it. The code is free and it is yours to use. So, feel free to modify it and ask other people for help with it. If you want to be nice, link back to this page if you get it working.

Oh, and since this dynamically updates an area of the page, it probably isn't very accessible to screen readers. Don't say I didn't warn you.

Source Code

You can download all the files mentioned above in a zip file. They are uncommented, but the code is clean enough that you should be able to figure out what is going on.