From Content to Community

Since the beginning of the commercial web, people have understood that “content is king.” Appealing, unique content is a guarantee of raw traffic, and that hasn’t changed with Web 2.0.

What HAS changed is that traffic from content won’t necessarily result in return visitors and loyalty. Syndication feeds have made it increasingly easy to filter exposure to websites, so that users see only what they want. I won’t bother browsing around a website that’s got interesting content 75% of the time, when I can grab its feed and use my newsreader to view interesting content from many sources nearly 100% of the time.

Quality content needs vibrant community interaction around it to ensure that a website gets loyal return visitors. A lot of old media still hasn’t figured this out. They try to fool users with fancy-looking websites, attempting to mask the fact that they’re still, well, old media.

One example is The San Francisco Chronicle’s upcoming redesign. While the visual feel is fairly clean and consistent, the page is horribly cluttered. The flawed rationale is pretty obvious: let’s put tons of crap on the screen and maybe someone will click something!

User feedback on the redesign is very mixed. I suspect that the positive responses are coming from non-tech-savvy readers, people who are evaluating the layout based on its resemblance to a print newspaper. (They’ll soon change their minds when they can’t easily find anything.) That audience isn’t very large, and it’s slowly dying out.

Interestingly, the negative responses aren’t just about layout clutter, but the lack of interactivity. Intelligent, web-savvy users aren’t interested in being passive readers. They want to be part of the news, to help shape it and to comment on it; they want their voices featured prominently on the site, not ghettoized in tiny comments sections, sidebar polls, or letters to the editor. Being a truly integral part of a community makes people feel appreciated, gives them a reason to come back, and makes them want to spread the word.

If Web 2.0 means anything at all, it means that people are realizing the web isn’t just another publishing medium; it’s an interface for social interaction. And this means successful websites are increasingly distinguished by the kinds of community they foster, not just their content. In the world of technology news, for example, there are plenty of sites that publish decent, timely content, original or aggregated. Sure, they each have their own editorial styles, but in my mind, what truly separates them are their communities: Slashdot is mostly full of snarky, pro-Linux and anti-Microsoft ideologues; ars technica is a bit more neutral, with a strong gamer and “power user” demographic; reddit tends to have good conversations about submitted links in its programming sections.

There will always be a place for online newspapers and their model of publishing, but I think their core readership will continue to decline unless they’re willing to give up their monopoly on content production and focus on fostering distinctive communities.

Mashup: Google Maps + Amazon Reviews

Saturday morning I woke up with a rather odd idea: to create a mashup of Google Maps and Amazon book reviews. I was mildly surprised to discover it hadn’t been done yet. Here’s the result of spending a chunk of the weekend writing it in Python (11/15 Update: should now work in Safari, Firefox 2, and IE 6):

http://mapreviews.codefork.com

It didn’t take long to write, since I’d recently been investigating the Google Maps API for a friend, and I’d also been doing bits of Amazon integration for a project. The biggest pain was the rate limiting on Amazon’s ECS API: one call per second per IP address, with error codes returned if you exceed the limit. So the application is pretty slow, especially if others are also using it.
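The workaround is simply to space out the requests. Here’s a minimal sketch of that kind of throttle in Python (fetch_review_page is a hypothetical stand-in for the real ECS request code, not the app’s actual function):

    import time

    MIN_INTERVAL = 1.0  # ECS allows roughly one request per second per IP

    _last_call = [0.0]  # wrapped in a list so the wrapper can mutate it

    def throttled(func):
        """Delay each call so that successive requests stay under the rate limit."""
        def wrapper(*args, **kwargs):
            wait = MIN_INTERVAL - (time.time() - _last_call[0])
            if wait > 0:
                time.sleep(wait)
            _last_call[0] = time.time()
            return func(*args, **kwargs)
        return wrapper

    @throttled
    def fetch_review_page(request_url):
        # placeholder: a real version would issue the actual ECS request for
        # a page of reviews here and parse the XML response
        return None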

But if you’re patient, it’s fun to watch the map markers gradually pop up. It’s also fun to look up a book with strong political leanings, and see how the ratings are distributed geographically. For example, you can look at The Communist Manifesto by Marx, and Bill O’Reilly’s Culture Warrior (*shudder*). Data for both are cached, so they should load very fast.
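The caching is nothing elaborate, by the way; conceptually it just keys results on the book’s ASIN so repeat lookups never touch the API. Something along these lines (a rough sketch; the names are made up for illustration, not the app’s actual code):

    import shelve

    def cached_reviews(asin, fetch_func, cache_path='reviews.cache'):
        """Return review data for a book, hitting the ECS API only on a cache miss."""
        cache = shelve.open(cache_path)
        try:
            if asin not in cache:
                cache[asin] = fetch_func(asin)  # slow path: goes out to Amazon
            return cache[asin]
        finally:
            cache.close()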

Who, me? The problem with a “do not track” list

Should there be a federally regulated “do not track” list for the internet, similar to the existing “do not call” lists? There’s an angle to this issue that I think proponents are missing.

As at least one person has already pointed out, the internet doesn’t work like a telephone system. It makes sense to say “do not call me”, since the “me” is the phone number. But how do you identify the “me” who’s using the web? Schemes using IP addresses and browser cookies aren’t adequate, since they can often be shared by several people.

Contextual advertising tries to make smart guesses about what might interest the user, but it’s only as good as its assumptions about whether the same individual generated the browsing patterns. The fact that advertisers are constantly extending their networks to collect more data and perpetually improving their algorithms speaks to how difficult this identification problem is.

This is not simply a technical problem, but one that has broader social ramifications. The crux of it is this: in order to say “do not track me,” there needs to be a “me.” Supporters of this initiative are, in effect, also supporting the creation of a strong identification mechanism. Any federal regulation would need such an ID in order to sign people up. Otherwise, how would you sign them up? Who’s the “me” that advertisers shouldn’t track?

A “do not track” list might successfully limit advertisers’ collection of web usage data, but it would certainly also improve the government’s ability to collect it. Would privacy really be improved then? The more practical solution is to encourage people to use ad blockers and secure channels, and to educate them on how to be savvier web users.

Is “race” a valid scientific construct?

James Watson, of Watson and Crick fame, resigned this morning (Race row DNA scientist quits lab). No doubt it was due to the controversy over his incredibly offensive comments about people of African descent:

He was quoted as saying he was “inherently gloomy about the prospect of Africa” because “all our social policies are based on the fact that their intelligence is the same as ours – whereas all the testing says not really”.

He later issued a pretty feeble apology; actually more of a statement that he’d been misunderstood. On the subject of whether Africans were genetically inferior, he retracted his earlier statement: “there is no scientific basis for such a belief.”

The most interesting discussions about this incident so far revolve around the question: Is “race” a valid scientific construct for genetic research, or is it merely a social construct? It’s a deceptive question whose answer probably isn’t either/or: for example, I think it’s been shown that certain populations correlated with “race” are more genetically prone to particular medical conditions.

I’m a skeptic when it comes to science. For me, the deeper question is: when does race get used in research, and when doesn’t it? For what purposes? Science rarely (if ever) takes place in a vacuum of objectivity. Research gets funded, often in order to support human decisions about something. Note that Watson’s original comments reference “social policies.” He also mentioned what employers tend to think about the intelligence of people of African descent.

This is what’s truly scary about genetics in this day and age. It definitely can be a tool for helping people. But I suspect the greater likelihood is that it’ll be a tool for deciding who’s smart/able, who’s more deserving of opportunity, even who’s more deserving of a chance to live (health insurance companies love this stuff!).