From Content to Community

From the earliest days of the commercial web, people learned quickly that “content is king.” Appealing, unique content is a guarantee of raw traffic, and that hasn’t changed with Web 2.0.

What HAS changed is that traffic from content won’t necessarily result in return visitors and loyalty. Syndication feeds have made it increasingly easy to filter what you’re exposed to, so that a user can see only what they want to see. I won’t bother browsing around a website that has interesting content 75% of the time when I can grab its feed and use my newsreader to view interesting content from many sources nearly 100% of the time.

Quality content needs vibrant community interaction around it to ensure that a website gets loyal return visitors. A lot of old media still hasn’t figured this out. They try to fool users with fancy-looking websites, attempting to mask the fact that they’re still, well, old media.

One example is The San Francisco Chronicle’s upcoming redesign. While the visual feel is fairly clean and consistent, the page is horribly cluttered. The flawed rationale is pretty obvious: let’s put tons of crap on the screen and maybe someone will click something!

User feedback on the redesign is very mixed. I suspect that the positive responses are coming from non-tech savvy readers, people who are evaluating the layout based on its resemblance to a print newspaper. (They’ll soon change their minds when they can’t easily find anything.) That audience isn’t very large and it’s slowly dying out over time.

Interestingly, the negative responses aren’t just about the cluttered layout, but about the lack of interactivity. Intelligent, web-savvy users aren’t interested in being passive readers. They want to be part of the news, to help shape it and to comment on it; they want their voices featured prominently on the site, not ghettoized in tiny comments sections, sidebar polls, or letters to the editor. Making people a truly integral part of a community makes them feel appreciated, gives them a reason to come back, and makes them want to spread the word.

If Web 2.0 means anything at all, it means that people are realizing the web isn’t just another publishing medium; it’s an interface for social interaction. And this means successful websites are increasingly distinguished by the kinds of community they foster, not just their content. In the world of technology news, for example, there are plenty of sites that publish decent, timely content, original or aggregated. Sure, they each have their own editorial styles, but in my mind, what truly separates them is their unique communities: Slashdot is mostly full of snarky, pro-Linux and anti-Microsoft ideologues; ars technica is a bit more neutral, with a strong gamer and “power user” demographic; reddit tends to have good conversations about submitted links in its programming subsections.

There will always be a place for online newspapers and their model of publishing, but I think their core audience will continue to decline unless they’re willing to give up their monopoly on content production and focus on fostering distinctive communities.

Two Styles of Caching (PHP’s Cache_Lite vs memcached)

Since the recent slashdotting of our website (we held up okay, but there’s always room for improvement), I’ve been investigating the possibility of moving from Cache_Lite (actually, Cache_Lite_Function) to memcached in our PHP code.

Much of the discussion comparing these two solutions focuses on raw performance in benchmarks. In the real world, though, conditions outside the benchmark are rarely equal. On a VPS, disk I/O times are notoriously variable, which makes memcached all the more attractive. Yes, memory is faster than disk in almost every environment, but avoiding disk access also conserves a scarce resource, so fewer processes have to block waiting for it.

A public mailing list post by one Brian Moon points this out exactly:

If you rolled your own caching system on the local filesystem, benchmarks would show that it is faster. However, what you do not see in benchmarks is what happens to your FS under load. Your kernel has to use a lot of resources to do all that file IO. […]

So, enter memcached. It scales much better than a file based cache. Sure, its slower. I have even seen some tests where its slower than the database. But, tests are not the real world. In the real world, memcached does a great job.

Okay, great. memcached is better when you take into account overall resources. But there’s a very useful Cache_Lite_Function feature that memcached doesn’t seem to have.

When you initialize a Cache_Lite_Function object, you set a “lifeTime” parameter, then use the call() method to wrap your regular function calls. If the function’s output hasn’t been cached within that time period, the call is made and its result replaces the cache entry with a fresh timestamp.
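
For reference, here’s roughly what that looks like in code. This is just a sketch assuming the standard PEAR setup; get_headlines() stands in for one of our own functions:

    <?php
    require_once 'Cache/Lite/Function.php';

    // Cache results on disk for 5 minutes.
    $cache = new Cache_Lite_Function(array(
        'cacheDir' => '/tmp/cache/',
        'lifeTime' => 300
    ));

    // Instead of calling get_headlines(10) directly, wrap it with call().
    // If a cached result younger than lifeTime exists, it's returned from
    // the cache; otherwise the real function runs and its result is stored
    // with a new timestamp.
    $headlines = $cache->call('get_headlines', 10);
    ?>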

The cool thing about it is that you can create different cache objects pointing to the same directory store without a problem. Pages can increase and decrease the lifetime of the cache dynamically as load changes, so you can serve slightly older data from cache if necessary, keeping the site responsive while saving database queries. On a site where content changes relatively infrequently, this is a great feature to have: serve it fresh when load is low, serve from cache when load is high.
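
A sketch of what I mean, with a made-up load threshold and a hypothetical build_front_page() function; the point is that different lifetimes can read and write the same cacheDir:

    <?php
    require_once 'Cache/Lite/Function.php';

    // Pick a cache lifetime based on current load (Linux-only check here).
    $load = sys_getloadavg();
    $lifeTime = ($load[0] > 4) ? 1800 : 120;  // stale-but-fast when busy

    // Different pages (or the same page at different times) can use
    // different lifetimes against the same directory store.
    $cache = new Cache_Lite_Function(array(
        'cacheDir' => '/tmp/cache/',
        'lifeTime' => $lifeTime
    ));

    $html = $cache->call('build_front_page');
    ?>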

memcached, on the other hand, requires that you specify an expiration time at the time you place data in the cache. A retrieval call doesn’t let you specify a time period, so you can’t do the above. If data has expired, it’s expired.
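
With the pecl Memcache client, for instance, the expiration is fixed when the value is stored; at retrieval time you can’t ask for a slightly staler copy, because the server has already thrown it away:

    <?php
    $mc = new Memcache();
    $mc->connect('localhost', 11211);

    $html = '...rendered page...';

    // Expiration (300 seconds) is decided here, at store time.
    // The third argument is flags (e.g. compression), not a lifetime.
    $mc->set('front_page', $html, 0, 300);

    // Once those 300 seconds are up, get() just returns false;
    // there's no way to say "give me the old copy anyway."
    $cached = $mc->get('front_page');
    ?>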

It’d be interesting to hack Cache_Lite_Function to use memcached as its store, so you could get the best of both worlds. It would involve storing things in memcached with no expiration, tacking on a timestamp in the data, and doing the checking manually. But it might work.
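
Here’s a rough, untested sketch of the idea. The function names are mine, not Cache_Lite’s; the mechanics are just: store forever, remember when, check on the way out.

    <?php
    // Store with no expiration (0), keeping our own timestamp beside the data.
    function mc_cache_set($mc, $key, $data) {
        $mc->set($key, array('ts' => time(), 'data' => $data), 0, 0);
    }

    // The caller supplies the lifetime at retrieval time, which is the
    // Cache_Lite_Function behavior memcached doesn't give us by itself.
    function mc_cache_get($mc, $key, $lifeTime) {
        $entry = $mc->get($key);
        if ($entry === false || (time() - $entry['ts']) > $lifeTime) {
            return false;  // missing, or too old for this particular caller
        }
        return $entry['data'];
    }
    ?>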

There’s no such thing as a content management system

During a meeting at work today, someone remarked, “No one I know seems happy with their content management system.”

Somehow, that’s unsurprising. The problem, I think, is that there’s really no such thing as a content management system. Think about how absurd that term is. It’s a system (it’s organized and has structure) that manages (performs operations on) content (er, stuff). Well then… what piece of software isn’t a CMS?!

When people talk about a CMS, they really mean publishing software. The website I maintain was written specifically for managing news articles, and it does its job reasonably well, despite needing some cleanup and refactoring. What’s devious about the term “CMS” is that people start to expect all sorts of things from it. After all, it manages content, right? So why can’t it easily integrate with other sites, offer social networking features, do fancy AJAX tricks, and make dinner, with CPU cycles to spare?

The fact is, no software can do it all. There’s sometimes a wishful notion that if we were using a pre-packaged CMS instead of a custom solution, we’d be better off. That’s just not true. A pre-packaged CMS can be a good option for simple needs, but customizing one is often such a headache that you’d have been better off writing something custom-tailored in the first place. The most flexible (and therefore “best”) pre-packaged CMSes are often not ready-to-run software at all, but well-designed frameworks (like Zope) that require coding for the specific content you want to handle.

So why is no one happy with what they have? I suspect it’s because they didn’t give enough thought to what they wanted, or their expectations were too high, or both.

There’s nothing magical about a CMS. It follows the same rules as any other kind of software: the requirements for what it does should be clear, and the proper code abstractions should be in place. Like any other project, it should support a set of features but also be able to change and grow easily. And you can only achieve those goals with proper planning and good code design, not with confusing lingo like “content management system.”