Posts Tagged ‘Meta’

Old Stuff Up, New Stuff On The Way

Tuesday, January 20th, 2009

I finally finished importing all the relevant old content from Robertdots of yore. It amounted to about 36-ish posts. I’ve set up all the 301 redirects and pulled the 503 I’ve had running for a month or more.

I still have some editing to do on older stuff, and there are still some areas of the CSS that I need to address. The pressing stuff is done, though. In the days ahead, I’ll concern myself with new stuff.

Apologies For Old Content

Wednesday, January 14th, 2009

I started working on importing all the old content I planned on keeping on the site. So far, I’ve only posted two or three old stories, but I have another 30-something in the queue. So, I want to apologize to anyone who reads my news feeds for all this old content showing up. At this point, I don’t know how to make WordPress understand that these posts are old and should (at least) be appended to the end of the feed rather than treated as new.

Until I get through all the old stuff (hopefully by the end of the week), please be patient.

Robertdot Relaunch

Wednesday, January 7th, 2009

Welcome to the relaunch of Robertdot. As you can see, there isn’t much here. I still have a ton of work importing relevant old posts, fleshing out the site, and debugging.

The reason for all the changes is to help take Robertdot away from its original purpose as a personal blog and move it toward my new goal for the site as a web design blog. That means I’ve purged everything, and I’ll slowly bring back content relevant to the site while letting the unneeded parts fall away.

If you’re used to keeping up with my personal life here on Robertdot, you may want to take a look at my new personal site, Robert Brodrecht’s Vanity Site.

So, keep an eye out in the coming weeks for things to get finalized and Robertdot to turn into a finished product, rather than the live beta you see now.

A Rant: Something I’m Tired Of

Wednesday, August 13th, 2008

Recently, there has been a slew of “25 Great Whatevers” or “15 Awesome Ways To Do Something” articles. Smashing Magazine does this constantly, and SitePoint is guilty, too. Many of these articles aggregate links to the same tired sites and tout some little nook of them as very well done. I mean, how many times do I have to look at Avalon Star? Jeeeeesus.

My main problem is that these lists are a poor excuse for content. I would ignore them (easily, since I’m not subscribed to their feeds), but people like Paul Boag love them. So, I have to notice them, even though I don’t want to.

Ok. End Poorly Written Rant.

NYTimes Hand Codes

Wednesday, April 30th, 2008

For future reference, the Art Director says (search for “Visual Consistency”) that they hand-code their site. It still uses a loose DTD and table-based layout, but at least they aren’t using Dreamweaver.

Netscape Discontinued

Friday, December 28th, 2007

According to the BBC, Netscape Navigator is being discontinued. Thank God.

The article goes over a brief history of Netscape. The BBC left out the fact that the languishing browser has been the laughingstock of the alternative browsers for some time. The farce that was Internet Explorer Mode in Netscape 8 was the final nail in the coffin for me. Mozilla and, later, Firefox have been much better choices for alternative browsers since the Gecko rendering engine was unleashed, and Internet Explorer was arguably preferable in the Netscape 4 days (what the hell is a layer, anyway?).

Good riddance, Netscape Navigator.

Who the Hell is Domain Design Shop?

Friday, September 21st, 2007

I ordered a couple of domains from GoDaddy a few days ago. I wasn’t really surprised when I found an e-mail with the subject “Important information about your domain.” I was surprised at what was inside.

Congratulations on registering the domain: [removed for privacy]. Now is the time to establish an effective Internet presence. Domain Design Shop is an internationally recognized Web design company that specializes in marketing and branding. Our staff has assisted hundreds of companies, organizations, and individuals in achieving their goals of developing custom web sites and technology solutions.

Please click here to see our portfolio:

Warm Regards, [email:]

I’m not sure if these people are harvesting addresses from my domain’s WHOIS record or if they are in cahoots with GoDaddy. I’m guessing the former. I really need to set up a dedicated e-mail account for domain registrations.

So, here is this design company that spams new domain buyers in hopes that they will hire them. That is a really bad way to convince people of your services. But what the hell? I’ll take a look at their site.

I am greeted with a punch in the retina by an alarmingly green monstrosity. As usual, I look down at the bottom left of my browser at SafariTidy’s output: six warnings on their home page. I click to open the report. Each warning is about deprecated or proprietary attributes. The code is tag soup and table-based. None of the images have alt attributes. So far, these guys are spamming me and showing me that the product they would provide is complete shit.

Then I notice that the color of the background image doesn’t exactly match the background color of the document. The header image’s background color does match. So, there are all kinds of slightly different colored blocks floating around. It’s this sort of lack of attention to detail that separates the mediocre from the great. They also have a weird gray drop-shadow-esque thing that I can’t figure out.

The one thing that shines on their site is their logo work. I wish they had included some one-color proofs, though. Other than that, their work isn’t particularly appealing to me. It is average, overall.

And, finally, the ISO Best Web Design 2006 badge on their site is… well, tacky. They don’t even provide a link to a press release. For all I know, it’s a stock graphic; I’ve never seen any other ISO Best Web Design badges. Since the badge dates the site to 2006, I’m sorry to say the work is not up to snuff for professional designers. The lack of standards, the offensive color scheme, and the lack of attention to detail all point in the direction of Don’t Hire Me (ignoring the fact that they spammed me). In fact, they ought to be hiring me (or someone like me) to catch them up to 2005.

A Better Spam System

Wednesday, October 18th, 2006

Everyone hates spam. That is a lie. For spam to be worthwhile, some people must be responding to it (and they supposedly are). But I don’t, and I don’t want it. I have a proposition (that, of course, will fail).

A new rash of spam has been flooding my inbox. My spam filter doesn’t pick it up because the text doesn’t say anything about viagra, porn, cialis, stocks, or anything else to get a guy going. It just has bits of novels or technical manuals and an attached image with text about what they really want to sell. Supposedly, these spammers want to un-train spam filters, and it might just work. I have to delete these e-mails manually so I don’t mess up my filters.

Everyone I’ve ever talked to hates spam. But some people are reading it and acting on it; if no one were, spammers wouldn’t bother. The problem is that most people don’t act on the spam. Some people, like me, recognize spam by subject and sender and never even open it.

What about us?

Spammers buy large lists of e-mail addresses and send e-mail to everyone on the list, perhaps cross-checking against lists of known dead addresses. There is no way to get off these lists without deleting the e-mail account, which really does no good.

There was supposed to be a do-not-spam e-mail list or some such nonsense. This might work if spammers could be held to some law. Not every spammer is in the US. Not every spammer in the US spams through US servers. It’d be worthless and impossible to keep up with.

I propose the opposite. We need a do-spam list. Here’s how it would work.

First, spammers would log in to some web site and enter every e-mail address they have. Any duplicates would be ignored. Then, every spammer would have the same large e-mail list. Spamming would continue as usual based on this list.

The database that holds the list would have three fields: email_id, address, and last_click.

Second, spammers would introduce unique URLs per address, so each link to the spam site would carry the recipient’s unique id. When the recipient clicks the link, the recipient’s e-mail address would be found in the do-spam list, and last_click would be updated with the current date and time.

Third, a daily cron job would go over the list and delete every e-mail address with a last_click older than, say, 30 days. Use of the list would continue as usual while the total number of uninterested parties shrank.

Fourth, the spammer could optionally keep his own database based on the existing users in the do-spam list. The spammer’s database might track what sorts of products the particular recipient is interested in. For example, some guy might click on porn and stocks, but not viagra because the guy is 30 and has no problems with getting an erection. That data could be shared via affiliate programs, if desired. Then, spam could be more targeted and might convert better.

By simply removing people like me from the list due to inactivity, spammers would have a higher return on investment. It would create more targeted spam and drastically reduce the amount of money spammers spend on finding victims.

There are only two issues. The first is getting new addresses; these could be collected by traditional methods. The second is that spammers can’t be trusted to manage the list. So, I propose that a disinterested organization run open spam e-mail relays that use the do-spam database. All spammers would have access to are the unique ids for the e-mail addresses, not the addresses themselves. Spammers would have to register with the website, and probably pay a fee, to make sure they are serious about spamming.
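The pieces above (the shared list, per-address click tracking, and the 30-day prune) can be sketched in a few lines of Python. This is a minimal illustration using an in-memory SQLite table; the helper names (register, record_click, prune) are my own invention, not part of any real system:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE do_spam ("
    "  email_id INTEGER PRIMARY KEY,"  # the unique id handed to spammers
    "  address TEXT UNIQUE,"           # duplicates are ignored on insert
    "  last_click TEXT)"               # timestamp of the last tracked click
)

def register(address, now):
    # Step one: add an address to the shared list; duplicates are
    # silently ignored thanks to the UNIQUE constraint.
    conn.execute(
        "INSERT OR IGNORE INTO do_spam (address, last_click) VALUES (?, ?)",
        (address, now.isoformat()),
    )

def record_click(email_id, now):
    # Step two: a unique per-address URL resolves to an email_id;
    # clicking it refreshes last_click for that address.
    conn.execute(
        "UPDATE do_spam SET last_click = ? WHERE email_id = ?",
        (now.isoformat(), email_id),
    )

def prune(now, days=30):
    # Step three: the daily cron job drops every address with no
    # click in the last 30 days.
    cutoff = (now - timedelta(days=days)).isoformat()
    conn.execute("DELETE FROM do_spam WHERE last_click < ?", (cutoff,))
```

An address that never clicks anything ages out after 30 days; a single click keeps it on the list for another 30, which is exactly the self-cleaning behavior described above.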

I imagine there are still bugs to work out, but I think this is a good start. Feel free to post your own thoughts, problems, and ideas on the subject.

Firefox Crop Circles

Monday, August 28th, 2006

When you get a bunch of geeks together, weird stuff happens.

A bunch of Oregon State University geeks got together to make a crop circle of the Firefox logo, 220 feet in diameter.

Examining John C. Dvorak’s Anti-CSS Article

Tuesday, July 18th, 2006

We all know it. John C. Dvorak is a troll. He’s not just a troll. He’s an uninformed troll. In Why CSS Bugs Me, an article posted on June 12th, 2006, at the PC Magazine website, John C. Dvorak explains why CSS is useless.

The base assumption of his article is that he actually knows what he is talking about. John C. Dvorak is not a designer. He’s a tech columnist. His blog site, which he claims to be redesigning, is proof enough of that. It’s pretty clear that his inability to style a website should mean his opinion is worthless. Nevertheless, I found the article linked from CSS Insider. That means someone cared about his opinion.

I know I’m in for a treat when the opening line of an article ends with a question mark and an exclamation point. Dvorak claims that none of this stuff works. That statement is absolutely wrong. A great deal of it works. In fact, almost everything works as expected. If I set font-family: verdana, helvetica, sans-serif on a div tag, every important graphical browser released since I started using the web will render the text in one of the listed fonts. It’s only when using advanced CSS that problems arise.

What John seems to be upset about is the PC platform. Frankly, every major browser for the Mac is pretty standards compliant. Opera was a little bit of a problem before Opera 9 was released, but even Opera 8 rendered most things the same way Firefox and Safari did. The problem lies on the PC, where one of the major browsers, which happens to be the most popular, is not standards compliant. That browser is Internet Explorer, in case you’ve been living in a hole. Microsoft even prides itself on how well it embraced and extended standards (for example, filters). The major problem for most designers is that the Internet Explorer rendering engine does the box model wrong. But, really, with conditional comments, making a bit of CSS for Internet Explorer is a trivial task.

The point is that he is picking on CSS instead of picking on the people who write the CSS rendering engines. He should be bad-mouthing Microsoft, not CSS.

I take that back. For a moment, he did take a strike at the defining principle of CSS: it cascades.

The first problem is the idea of cascading. It means what it says: falling – as in falling apart. You set a parameter for a style element, and that setting falls to the next element unless you provide it with a different element definition. This sounds like a great idea until you try to deconstruct the sheet. You need a road map. One element cascades from here, another from there. One wrong change and all hell breaks loose. If your Internet connection happens to lose a bit of CSS data, you get a mess on your screen.

I’ll ignore the part about the connection losing a bit of CSS data because I think it’s an asinine suggestion. I can only think of one situation where this would happen, and it wouldn’t totally mess up the design if the CSS were written properly. What I want to address is the fact that what Dvorak is bitching about is how CSS was designed to work. The beauty of it is that I can declare a font-family on the body and it cascades down into paragraph tags; I don’t have to explicitly set the value every time. Further, plain old HTML had some amount of inheritance built in (for example, wrapping a bold tag in a font tag causes whatever is in the bold tag to inherit the font information). The cascading nature isn’t broken; rather, Dvorak’s understanding of CSS, or his ability to use it, is underdeveloped. If he’s having trouble figuring out where a style comes from, it is probably the fault of the author or of his own ability to read CSS, not the fault of CSS. Well-written CSS, like C or PHP, is quite easy to read if the reader knows what he is reading and the writer wrote it well.

John closes his article by throwing punches at the W3C.

And what’s being done about it? Nothing! Another fine mess from the standards bodies.

First, it’s not the W3C’s fault that the browsers don’t implement the standard properly. That’s like blaming lawmakers when criminals commit crimes. If there were only Mozilla and Internet Explorer, I might concede that the standard is unclear. However, Opera, Safari, and Mozilla typically agree on how a page should look. So, I can’t blame it on the specification.

Second, something is being done about it. Initiatives like the Web Standards Project have been busting ass for years to get browser makers to stick to standards, and the web standards movement is gaining major footholds. The only major graphical browser that isn’t standards compliant is Internet Explorer, and the IE7 team is working hard to make it more standards compliant. Pretty soon, the differences between how browsers render pages will be almost nil. The only difference will be how soon browser makers integrate new CSS features, like those in the CSS 3 draft that are already being built into browser releases.

It seems John got his panties in a wad over this one. For once, I think this was a real rant rather than a calculated troll to boost the number of page views on his articles. Unfortunately, he comes across as a whining web design newbie. Those of us who have been doing standards-based web design for a while have dealt with far worse than he has, and we still appreciate CSS for the excellent tool it is.

If you want to read the article, it can be found on PC Magazine.