Archive for the ‘Meta’ Category

A Better Spam System

Wednesday, October 18th, 2006

Everyone hates spam. That is a lie. For spam to be worthwhile, some people must be acting on it (and supposedly they are). But I don’t, and I don’t want it. I have a proposition (that, of course, will fail).

A new rash of spam has been flooding my inbox. My spam filter doesn’t pick it up because the text in it doesn’t say anything about viagra, porn, cialis, stocks, or anything else to get a guy going. It just has bits of novels or technical manuals and an image attached that has text about what they really want to sell. Supposedly, these spammers want to un-train spam filters and it might just work. I have to manually delete these e-mails so I don’t mess up my filters.

Everyone I’ve ever talked to hates spam. But some people are reading it and acting on it; if no one were, spammers wouldn’t bother. The problem, from a spammer’s point of view, is that most people never act on the spam. Some people, like me, recognize spam by subject and sender and never even open it.

What about us?

Spammers buy large lists of e-mail addresses and send e-mail to everyone on the list, perhaps cross-checking against a list of known dead addresses. There is no way to get off these lists short of deleting the e-mail account, which really does no good.

There was supposed to be a do-not-spam e-mail list or some such nonsense. This might work if spammers could be held to some law, but not every spammer is in the US, and not every spammer in the US spams through US servers. Such a list would be worthless and impossible to keep up with.

I propose the opposite. We need a do-spam list. Here’s how it would work.

First, spammers would log in to some web site and enter every e-mail address they have. Any duplicates would be ignored. Then, every spammer would have the same large e-mail list. Spamming would continue as usual based on this list.

The database that holds the list would have three fields: email_id, address, and last_click.
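
To make the idea concrete, here’s a rough sketch of that table and the duplicate-ignoring sign-up step. It’s Python with SQLite purely for illustration; a shared list would obviously need a real database server, and the function names are mine, not part of any spec.

```python
import sqlite3

# Rough sketch of the do-spam list: three fields, duplicates ignored.
conn = sqlite3.connect("do_spam.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS do_spam (
        email_id   INTEGER PRIMARY KEY AUTOINCREMENT,
        address    TEXT UNIQUE NOT NULL,
        last_click TIMESTAMP DEFAULT CURRENT_TIMESTAMP  -- new addresses get a grace period
    )
""")

def register_addresses(conn, addresses):
    """Step one: a spammer dumps in every address he has; duplicates are skipped."""
    for addr in addresses:
        conn.execute("INSERT OR IGNORE INTO do_spam (address) VALUES (?)", (addr,))
    conn.commit()
```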

Second, spammers would introduce unique URLs per address. So, a link to the spam site would be something like http://getsomeviagra.com/?uid=3476. When the recipient clicks this link, the recipient’s e-mail address would be looked up in the do-spam list and its last_click would be updated with the current date and time.
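
Something like this would sit behind that uid link. Assuming the uid in the URL is simply the email_id (that’s my reading of the example), the click handler only has to touch one row:

```python
def record_click(conn, email_id):
    """Called when someone follows a link like http://getsomeviagra.com/?uid=3476;
    the uid is taken to be the email_id in the do-spam list."""
    conn.execute(
        "UPDATE do_spam SET last_click = CURRENT_TIMESTAMP WHERE email_id = ?",
        (email_id,),
    )
    conn.commit()
```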

Third, a daily cron job would go over the list and delete every e-mail address whose last_click was older than, say, 30 days. Use of the list would continue as usual while the total number of uninterested parties shrank.
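
The cron job itself is barely more than one query against that table. A sketch, with the 30 days as a parameter since that number is just a guess:

```python
def prune_inactive(conn, days=30):
    """The daily cron job: delete every address whose last_click is older than `days` days."""
    cur = conn.execute(
        "DELETE FROM do_spam WHERE last_click < datetime('now', ?)",
        ("-{} days".format(days),),
    )
    conn.commit()
    return cur.rowcount  # how many uninterested parties were dropped
```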

Fourth, the spammer could optionally keep his own database based on the existing users in the do-spam list. The spammer’s database might track what sorts of products the particular recipient is interested in. For example, some guy might click on porn and stocks, but not viagra because the guy is 30 and has no problems with getting an erection. That data could be shared via affiliate programs, if desired. Then, spam could be more targeted and might convert better.
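
The spammer-side tracking could be as simple as a clicks-per-category table keyed on email_id, so it can be traded around without ever exposing an address. The table and category names here are made up for the example:

```python
def setup_interests(conn):
    """Spammer-side table: which kinds of products a given recipient clicks on."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS interests (
            email_id INTEGER NOT NULL,   -- refers to do_spam.email_id, never the address
            category TEXT NOT NULL,      -- e.g. 'porn', 'stocks', 'viagra'
            clicks   INTEGER NOT NULL DEFAULT 0,
            PRIMARY KEY (email_id, category)
        )
    """)

def record_interest(conn, email_id, category):
    """Bump the click count for one recipient and category."""
    conn.execute("""
        INSERT INTO interests (email_id, category, clicks) VALUES (?, ?, 1)
        ON CONFLICT (email_id, category) DO UPDATE SET clicks = clicks + 1
    """, (email_id, category))
    conn.commit()
```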

By simply removing people like me from the list due to inactivity, spammers would have a higher return on investment. It would create more targeted spam and drastically reduce the amount of money spammers spend on finding victims.

There are only two issues. The first is getting new addresses; these could be collected by traditional methods. The second is that spammers can’t be trusted to manage the list. So, I propose that a disinterested organization run open spam e-mail relays that use the do-spam database. All a spammer would have access to is the unique IDs for the e-mail addresses, never the addresses themselves. Spammers would have to register with the website, and probably pay a fee, to make sure they are serious about spamming.
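
The relay is the piece that keeps spammers honest: they hand over a message and a batch of IDs, and only the relay ever resolves an ID to an address. A sketch of how that might look (smtplib here is just to show the shape of the thing, not a real design):

```python
import smtplib
from email.message import EmailMessage

def relay_send(conn, smtp_host, from_addr, subject, body, email_ids):
    """Run by the disinterested organization. Spammers supply email_ids only;
    the real addresses never leave this function."""
    with smtplib.SMTP(smtp_host) as smtp:
        for email_id in email_ids:
            row = conn.execute(
                "SELECT address FROM do_spam WHERE email_id = ?", (email_id,)
            ).fetchone()
            if row is None:
                continue  # pruned for inactivity, or never existed
            msg = EmailMessage()
            msg["From"] = from_addr
            msg["To"] = row[0]
            msg["Subject"] = subject
            msg.set_content(body)
            smtp.send_message(msg)
```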

I imagine there are still bugs to work out, but I think this is a good start. Feel free to post your own thoughts, problems, and ideas on the subject.

Firefox Crop Circles

Monday, August 28th, 2006

When you get a bunch of geeks together, weird stuff happens.

A bunch of Oregon State University geeks got together to make a 220-foot-diameter crop circle of the Firefox logo.

Examining John C. Dvorak’s Anti-CSS Article

Tuesday, July 18th, 2006

We all know it. John C. Dvorak is a troll. He’s not just a troll. He’s an uninformed troll. In Why CSS Bugs Me, an article posted on June 12th, 2006, at the PC Magazine website, John C. Dvorak explains why CSS is useless.

The base assumption of his article is that he actually knows what he is talking about. John C. Dvorak is not a designer. He’s a tech columnist. His blog site, which he claims to be redesigning, is proof enough of that. His evident inability to style a website should make his opinion on the subject worthless. Nevertheless, I found the article linked from CSS Insider, which means someone cared about his opinion.

I know I’m in for a treat when the opening line of an article ends with a question mark and an exclamation point. Dvorak claims that none of this stuff works. That statement is absolutely wrong. A great deal of it works. In fact, almost everything works as expected. If I put font-family: verdana, helvetica, sans-serif on a div tag, every important graphical browser released since I started using the web will render the text in one of the listed fonts. It’s only when using advanced CSS that problems arise.

What John seems to be upset about is the PC platform. Frankly, every major browser for the Mac is pretty standards compliant. Opera was a bit of a problem before Opera 9 was released, but even Opera 8 rendered most things the same way Firefox and Safari did. The problem lies on the PC, where one of the major browsers, which happens to be the most popular, is not standards compliant. That browser is Internet Explorer, in case you’ve been living in a hole. Microsoft even prides itself on how well it embraced and extended standards (for example, filters). The major problem for most designers is that the Internet Explorer rendering engine gets the box model wrong. But, really, with conditional comments, writing a bit of CSS just for Internet Explorer is a trivial task.

The point is that he is picking on CSS instead of picking on the people who write the CSS rendering engines. He should be bad-mouthing Microsoft, not CSS.

I take that back. For a moment, he did take a swipe at the defining principle of CSS: it cascades.

The first problem is the idea of cascading. It means what it says: falling – as in falling apart. You set a parameter for a style element, and that setting falls to the next element unless you provide it with a different element definition. This sounds like a great idea until you try to deconstruct the sheet. You need a road map. One element cascades from here, another from there. One wrong change and all hell breaks loose. If your Internet connection happens to lose a bit of CSS data, you get a mess on your screen.

I’ll ignore the part about the connection losing a bit of CSS data because I think it’s an asinine suggestion. I can only think of one situation where this would happen, and it wouldn’t totally mess up the design if the CSS were written properly. What I want to address is the fact that what Dvorak is bitching about is exactly how CSS was designed to work. The beauty of it is that I can declare a font-family on the body and it cascades down into paragraph tags; I don’t have to explicitly set the value every time. Further, plain old HTML had some amount of inheritance built in (for example, wrapping a bold tag in a font tag causes whatever is in the bold tag to inherit the font information). The cascading nature isn’t broken; rather, Dvorak’s understanding of CSS, or his ability to use it, is underdeveloped. If he’s having trouble figuring out where a style comes from, that is probably the fault of the author or of his own ability to read CSS, not the fault of CSS. Well-written CSS, like well-written C or PHP, is quite easy to read if the reader knows what he is reading and the writer wrote it well.

John closes his article by throwing punches at the W3C.

And what’s being done about it? Nothing! Another fine mess from the standards bodies.

First, it’s not the W3C’s fault that the browsers don’t implement the standard properly. That’s like blaming lawmakers when criminals commit crimes. If there were only Mozilla and Internet Explorer, I might concede that the standard is unclear. However, Opera, Safari, and Mozilla typically agree on how a page should look, so I can’t blame it on the specification.

Second, something is being done about it. Initiatives like the Web Standards Project have been busting ass for years to get browser makers to stick to standards, and the web standards movement is gaining major footholds. The only major graphical browser that isn’t standards compliant is Internet Explorer, and the IE7 team is working hard to make it more standards compliant. Pretty soon, the differences between how browsers render pages will be almost nil. The only difference will be how soon browser makers integrate new CSS features, like those in the CSS 3 draft that are already being built into browser releases.

It seems John got his panties in a wad over this one. For once, I think this was a real rant rather than a calculated troll to boost the number of page views on his articles. Unfortunately, he comes across as a whining web design newbie. Those of us who have been doing standards-based web design for a while have dealt with far worse than he has, and we still appreciate CSS for the excellent tool it is.

If you want to read the article, it can be found on PC Magazine.