“It Works”

This blog post, “The Worst Thing You Can Say About Software Is That It Works,” written by one Kenny Tilton, is pretty hilarious. This is the most beautiful thing I’ve read in a while:

if a pile of code does not work it is not software, we’ll talk about its merit when it works, OK? Therefore to say software works is to say nothing. Therefore anything substantive one can say about software is better than to say it works.

Reading this triggered flashbacks and PTSD. I’d mentioned to a manager recently that I wanted some time to do some badly needed refactoring. My explanation of why was met with a pause, then, “Let me get this straight. You want time to take something that already works, reorganize it, possibly break things, and we wouldn’t have anything new to even show for it?”

That last part was wrong: the value added comes from maintainability and extensibility, but I couldn’t get him to really grasp those ideas. He’s not a technology person. For all he knew, maybe this was an elaborate ruse on my part to be left undisturbed while I surfed porn at my desk for a few weeks.

I work in a very small shop with all non-technology people, so this sort of thing happens a lot. It’s frustrating. It’s sort of nice to know I’m not alone in encountering this mindset. But man… if even the fellow programmer in Kenny’s story doesn’t get it, I’m not sure there’s much hope for the rest of the world.

From Content to Community

From the early days of the commercial web, people learned quickly that “content is king.” Appealing and unique content is a guarantee of raw traffic, and that hasn’t changed with Web 2.0.

What HAS changed is that traffic from content won’t necessarily result in return visitors and loyalty. Syndication feeds have made it increasingly easy to filter exposure to websites, so that users see only what they want to see. I won’t bother browsing around a website that’s got interesting content 75% of the time when I can grab its feed and use my newsreader to view interesting content from many sources nearly 100% of the time.

Quality content needs vibrant community interaction around it to ensure that a website gets loyal return visitors. A lot of old media still hasn’t figured this out. They try to fool users with fancy-looking websites, attempting to mask the fact that they’re still, well, old media.

One example is The San Francisco Chronicle’s upcoming redesign. While the visual feel is fairly clean and consistent, the page is horribly cluttered. The flawed rationale is pretty obvious: let’s put tons of crap on the screen and maybe someone will click something!

User feedback on the redesign is very mixed. I suspect that the positive responses are coming from non-tech-savvy readers, people who are evaluating the layout based on its resemblance to a print newspaper. (They’ll soon change their minds when they can’t easily find anything.) That audience isn’t very large, and it’s slowly dying out.

Interestingly, the negative responses aren’t just about layout clutter, but the lack of interactivity. Intelligent, web-savvy users aren’t interested in being passive readers. They want to be part of the news, to help shape it and to comment on it; they want their voices featured prominently on the site, and not ghettoized in tiny comments sections, sidebar polls, or letters to the editor. Being a truly integral part of a community makes people feel appreciated, gives them a reason to come back, and makes them want to spread the word.

If Web 2.0 means anything at all, it means that people are realizing the web isn’t yet another publishing medium; it’s an interface for social interaction. And this means successful websites are increasingly distinguished by the kinds of community they foster, not just their content. In the world of technology news, for example, there are plenty of sites that publish decent, timely content, original or aggregated. Sure, they each have their own editorial styles, but in my mind, what truly separates them are the unique communities: Slashdot is mostly full of snarky, pro-Linux and anti-Microsoft ideologues; ars technica is a bit more neutral with a strong gamer and “power user” demographic; reddit tends to have good conversations about submitted links in their programming subsections.

There will always be a place for online newspapers and their model of publishing, but I think their core readership and audience will continue to decline, unless they’re willing to give up their monopoly on content production and focus on fostering distinctive communities.

Mashup: Google Maps + Amazon Reviews

Saturday morning I woke up with a rather odd idea: to create a mashup of Google Maps and Amazon book reviews. I was mildly surprised to discover it hadn’t been done yet. Here’s the result of spending a chunk of the weekend writing it in Python (11/15 update: it should now work in Safari, Firefox 2, and IE6):

http://mapreviews.codefork.com

It didn’t take long to write, since I’d recently been investigating the Google Maps API for a friend, and I’d also been doing bits of Amazon integration for a project. The biggest pain was dealing with the rate limiting on Amazon’s ECS API, which allows one call per second per IP address and returns error codes if you exceed the limit. So the application is pretty slow, especially if others are also using it.
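Staying under a one-call-per-second cap usually just means throttling on the client side before each request. Here’s a minimal sketch of that idea in Python; the `RateLimiter` class and `fetch_reviews` function are hypothetical names for illustration, not the actual code behind the mashup:

```python
import time


class RateLimiter:
    """Client-side throttle: ensures successive calls are spaced at least
    min_interval seconds apart (e.g. one request per second for Amazon's
    ECS limit)."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last_call = 0.0  # monotonic timestamp of the previous call

    def wait(self):
        # Sleep just long enough to keep the required gap between calls.
        elapsed = time.monotonic() - self._last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_call = time.monotonic()


# Hypothetical usage: throttle before every web-service request.
limiter = RateLimiter(min_interval=1.0)


def fetch_reviews(asin):
    limiter.wait()  # blocks until a request is allowed
    # ... the actual ECS call would go here ...
    return {"asin": asin}
```

One caveat this sketch glosses over: the limit is per IP address, so if several users share the server (or the server shares an IP), a single in-process limiter like this one has to be shared across all of them, which is part of why the app slows down under concurrent use.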

But if you’re patient, it’s fun to watch the map markers gradually pop up. It’s also fun to look up a book with strong political leanings, and see how the ratings are distributed geographically. For example, you can look at The Communist Manifesto by Marx, and Bill O’Reilly’s Culture Warrior (*shudder*). Data for both are cached, so they should load very fast.

Who, me? The problem with a “do not call” list

Should there be a federally regulated “do not track” list for the internet, similar to the existing “do not call” lists? There’s an angle to this issue that I think proponents are missing.

As at least one person has already pointed out, the internet doesn’t work like a telephone system. It makes sense to say “do not call me”, since the “me” is the phone number. But how do you identify the “me” who’s using the web? Schemes using IP addresses and browser cookies aren’t adequate, since they can often be shared by several people.

Contextual advertising tries to make smart guesses about what might interest the user, but it’s only as good as its assumptions about whether it’s the same individual who generated the browsing patterns. The fact that advertisers are constantly extending their networks to probe more data and perpetually improving their algorithms speaks to how difficult this problem of identification is.

This is not simply a technical problem, but one that has broader social ramifications. The crux of it is this: in order to say “do not track me,” there needs to be a “me.” Supporters of this initiative are, in effect, implicitly also supporting the creation of a strong identification mechanism. Any federal regulation would need such an ID in order to sign people up. Otherwise, how would you? Who’s the “me” that advertisers shouldn’t track?

A “do not track” list might successfully limit advertisers’ collection of web usage data, but it would certainly also improve the government’s ability to do so. Would privacy really be improved then? The more practical solution is to encourage people to make use of ad-blockers and secure channels, and to educate them on how to be more savvy web users.