“It Works”

This blog post, “The Worst Thing You Can Say About Software Is That It Works,” written by one Kenny Tilton, is pretty hilarious. This is the most beautiful thing I’ve read in a while:

if a pile of code does not work it is not software, we’ll talk about its merit when it works, OK? Therefore to say software works is to say nothing. Therefore anything substantive one can say about software is better than to say it works.

Reading this triggered flashbacks and PTSD. I’d mentioned to a manager recently that I wanted some time to do some badly needed refactoring. My explanation of why was met with a pause, then, “Let me get this straight. You want time to take something that already works, reorganize it, possibly break things, and we wouldn’t have anything new to even show for it?”

That last part was wrong: the value added comes from maintainability and extensibility, but I couldn’t get him to really grasp those ideas. He’s not a technology person. For all he knew, maybe this was an elaborate ruse on my part to be left undisturbed while I surfed porn at my desk for a few weeks.

I work in a very small shop with all non-technology people, so this sort of thing happens a lot. It’s frustrating. It’s sort of nice to know I’m not alone in encountering this mindset. But man… if even the fellow programmer in Kenny’s story doesn’t get it, I’m not sure there’s much hope for the rest of the world.

EAcceleratorCacheFunction = Cache_Lite_Function + EAccelerator

It’s pretty much all in the title. In a nutshell, EAcceleratorCacheFunction is a “memoizing” cache class for PHP that uses shared memory for storage. It is mostly compatible with Cache_Lite and Cache_Lite_Function.

Just like Cache_Lite_Function, it supports per-cache-object lifetime values, instead of requiring the lifetime of an item to be fixed at the time you store it. This lets you change the lifetime of the cache dynamically. For example, if system load goes up and you don’t mind serving slightly older content instead of regenerating it:

$load = sys_getloadavg();
// use the 5-minute average (ignore momentary spikes)
if ($load[1] >= 6) {
    $lifetime = 900; // 15 min
} elseif ($load[1] >= 3) {
    $lifetime = 600; // 10 min
} else {
    $lifetime = 300; // 5 min
}
$cache = new EAcceleratorCacheFunction(array('lifeTime' => $lifetime));
// serves cached make_page() output if it's younger than $lifetime,
// otherwise calls make_page() and caches the fresh result
$cache->call('make_page');

I wrote EAcceleratorCacheFunction as a drop-in replacement for Cache_Lite_Function. On a virtual private server, doing cache reads/writes from memory instead of disk has made a noticeable difference in performance; it helps tremendously that the database has to contend with less disk I/O.
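For the curious, the general idea is simple enough to sketch. This isn’t the actual class, just a rough outline of the approach, assuming eAccelerator’s eaccelerator_put()/eaccelerator_get() shared-memory functions (the real class presumably also handles output buffering and the rest of the Cache_Lite_Function interface): memoize on the function name plus its arguments, store the result with no TTL, and check freshness against the cache object’s lifeTime at read time.

class ShmFunctionCacheSketch {
    public $lifeTime;

    public function __construct($options) {
        $this->lifeTime = $options['lifeTime'];
    }

    // memoizing call(): cache key = function name + serialized arguments
    public function call($function) {
        $all  = func_get_args();
        $args = array_slice($all, 1);
        $key  = md5($function . serialize($args));

        $raw   = eaccelerator_get($key);   // shared-memory read
        $entry = ($raw === null) ? null : unserialize($raw);
        if (is_array($entry) && (time() - $entry['time']) <= $this->lifeTime) {
            return $entry['data'];         // still fresh for this cache object
        }

        $data = call_user_func_array($function, $args);
        // TTL 0 = keep until evicted; freshness is decided above, at read time
        eaccelerator_put($key, serialize(array('time' => time(), 'data' => $data)), 0);
        return $data;
    }
}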

Two Styles of Caching (PHP’s Cache_Lite vs memcached)

Since the recent slashdotting of our website (we held up okay, but there’s always room for improvement), I’ve been investigating the possibility of moving from Cache_Lite (actually, Cache_Lite_Function) to memcached in our PHP code.

Much of the discussion comparing these solutions focuses on raw performance in benchmarks. In the real world, though, the things outside the benchmark are not all equal. On a VPS, disk I/O times are notoriously variable, which makes memcached all the more attractive. Yes, memory is faster than disk in almost every environment, but avoiding disk access also conserves a scarce resource, so fewer processes end up blocking on I/O.

A public mailing list post by one Brian Moon makes exactly this point:

If you rolled your own caching system on the local filesystem, benchmarks would show that it is faster. However, what you do not see in benchmarks is what happens to your FS under load. Your kernel has to use a lot of resources to do all that file IO. […]

So, enter memcached. It scales much better than a file based cache. Sure, its slower. I have even seen some tests where its slower than the database. But, tests are not the real world. In the real world, memcached does a great job.

Okay, great. memcached is better when you take into account overall resources. But there’s a very useful Cache_Lite_Function feature that memcached doesn’t seem to have.

When you initialize a Cache_Lite_Function object, you set a “lifeTime” parameter, then use the call() method to wrap your regular function calls. If the function’s output hasn’t been cached within that time period, the call actually gets made and its result is stored in the cache with a fresh timestamp.

The cool thing about it is that you can create different cache objects pointing to the same directory store without a problem. Pages can increase and decrease the lifetime of the cache dynamically as load changes, so you can serve slightly older data from cache if necessary, keeping the site responsive while saving database queries. On a site where content changes relatively infrequently, this is a great feature to have: serve it fresh when load is low, serve from cache when load is high.
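For illustration (the directory and lifetimes are made up, and make_page() stands in for whatever generates the page), two Cache_Lite_Function objects can point at the same cacheDir and simply disagree about how old is too old:

require_once 'Cache/Lite/Function.php';

$cacheDir = '/tmp/cache/';   // example path; any shared directory store works

// normally, insist on reasonably fresh output
$cache = new Cache_Lite_Function(array('cacheDir' => $cacheDir, 'lifeTime' => 300));
$cache->call('make_page');

// elsewhere (or on the same page, under load), read the same store with a
// more forgiving lifetime and accept older cached output
$lazy = new Cache_Lite_Function(array('cacheDir' => $cacheDir, 'lifeTime' => 900));
$lazy->call('make_page');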

memcached, on the other hand, requires that you specify an expiration time at the time you place data in the cache. A retrieval call doesn’t let you specify a time period, so you can’t do the above. If data has expired, it’s expired.
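For comparison, here’s roughly what that looks like with PHP’s Memcache extension (the server address, key, and lifetime are just placeholders): the expiration is baked in at set() time, and get() takes nothing but the key.

$mc = new Memcache();
$mc->connect('localhost', 11211);

$html = make_page();
// the expiration is fixed when the data is stored: 300 seconds, period
$mc->set('front_page', $html, 0, 300);

// get() only takes the key -- there's no way to ask for "anything newer
// than 15 minutes"; once the item expires, it's simply gone
$html = $mc->get('front_page');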

It’d be interesting to hack Cache_Lite_Function to use memcached as its store, so you could get the best of both worlds. It would involve storing things in memcached with no expiration, tacking on a timestamp in the data, and doing the checking manually. But it might work.
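A minimal sketch of that idea (a hypothetical helper, with none of Cache_Lite_Function’s other machinery): store forever, tuck a timestamp in next to the data, and let each caller’s lifetime decide freshness at read time.

// $mc is a connected Memcache object, $lifetime is in seconds
function memcache_call($mc, $lifetime, $function) {
    $all  = func_get_args();
    $args = array_slice($all, 3);
    $key  = md5($function . serialize($args));

    $entry = $mc->get($key);
    if (is_array($entry) && (time() - $entry['time']) <= $lifetime) {
        return $entry['data'];   // young enough for this caller's lifetime
    }

    $data = call_user_func_array($function, $args);
    // expiration 0 = store indefinitely; the timestamp does the real work
    $mc->set($key, array('time' => time(), 'data' => $data), 0, 0);
    return $data;
}

// e.g. $html = memcache_call($mc, $lifetime, 'make_page');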

There’s no such thing as a content management system

During a meeting at work today, someone remarked, “No one I know seems happy with their content management system.”

Somehow, that’s unsurprising. The problem, I think, is that there’s really no such thing as a content management system. Think about how absurd that term is. It’s a system (it’s organized and has structure) that manages (performs operations on) content (er, stuff). Well then… what piece of software isn’t a CMS?!

When people talk about a CMS, they really mean publishing software. The website I maintain was written specifically for managing news articles. It does its job reasonably well, despite needing some cleanup and refactoring. What’s devious about the term “CMS” is that people start to expect all sorts of things from it. After all, it manages content, right? So why can’t it easily integrate with other sites, offer social networking features, do fancy AJAX tricks, and make dinner, with CPU cycles to spare?

The fact is, no software can do it all. There’s sometimes the wishful thinking that if we were using a pre-packaged CMS instead of a custom solution, we’d be better off. That’s just not true. A pre-packaged CMS can be a good option for simple needs, but customization is often a huge headache; the end result is that you’d have been better off writing something custom-tailored to begin with. The most flexible (and therefore “best”) pre-packaged CMSes are often not ready-to-run software but well-designed frameworks (like Zope) that require coding for the specific content you want to handle.

So why is no one happy with what they have? I suspect it’s because they didn’t give enough thought to what they wanted, or their expectations were too high, or both.

There’s nothing magical about a CMS. It follows the same rules as any other kind of software: the requirements for what it does should be clear, and the proper code abstractions should be in place. It’s like any other project: it should support a set of features, but also be able to change and grow easily. And you can only achieve those goals with proper planning and good code design, not with confusing lingo like “content management system.”

The Lifespan of Software

Rumors of Chandler’s Death Are Greatly Exaggerated. So says the renowned Phillip J. Eby.

In light of all the damning media scrutiny paid to Chandler in recent years, Phillip makes an excellent point: the project funded work on a bunch of important open source Python libraries. I didn’t realize this, and it drastically changed my regard for the OSAF’s work. If this aspect of the project got mentioned more, I think Chandler would get a lot more respect. Even if Chandler 1.0 never sees the light of day, it’s already made major contributions to the Python community.

Proprietary software has a definite lifespan: once a company stops developing and supporting it, that’s the end. For the company, the value is locked up in the closed-source code base and isn’t transferable; the business model of selling software depends on this. Once the company kills off the product, the value more or less disappears. You can still use it, of course, but it will decrease in value as similar, hopefully better products appear on the market.

The value of open source software, on the other hand, isn’t limited to its immediate use. Even if an application is no longer actively used and maintained, the code can spark ideas, be forked into a new project, serve as a lesson in design, and so on. Its value can be perpetually renewed because the code keeps circulating in different forms. If the project is large enough, like Chandler or Zope, it can spawn mini-projects, components, and libraries for reuse.

Years ago, just for fun, I wrote a Java implementation of a Napster server. It was called jnerve, and I released the code as open source. I tried to get people to host it and use it, but opennap, the C implementation, was naturally faster, more efficient, and more mature. jnerve seemed like a dead end, so I stopped working on it. There were some cool architectural bits that were interesting to write, but I regarded the project as a failure.

Months later at a conference, I got a demo CD of some new peer-to-peer file sharing software. (“P2P” was all the rage then.) When I ran it, I was astounded to see a copyright message with my name on it. They had used my code as the basis for their commercial product! The code was able to live on in a different form. I’m not sure it was actually legal, given that jnerve was GPL, but I didn’t care enough to pursue the matter.