Category Archives: software

A Year of Stats for My VIAF Reconciliation Service

A year ago, I wrote an OpenRefine reconciliation service that queries VIAF and posted the source code on GitHub. I’ve also been hosting it publicly for anyone to use.

Below are two charts showing a year’s worth of usage statistics for this service. The first chart counts web requests made. (The way OpenRefine works, a single web request contains up to 10 name reconciliation queries, so 200 web requests can translate to as many as 2000 name reconciliations.) The second chart counts the unique hosts (i.e. unique computers, more or less) that used the service.

Last month, the busiest one yet, 47 different computers made an average of approximately 2360 name reconciliations each!

This usage has certainly exceeded my expectations. Now and then, I get a very nice tweet or two from someone who’s used it, which is really gratifying, considering it was just a little side project I threw together. You just never know.

Taking A Fresh Look at PHP

I’ve recently started working on a PHP/Laravel project.

PHP isn’t new to me. Many years ago, I wrote a very simple online catalog and shopping cart, from scratch, for a friend who had his own business as a rare book dealer. He used it with much success for several years. I’d also done a bit of hacking on some Drupal plugins.

Coming back to PHP now, I’m finding myself in a world MUCH different than the one I’d left.

First off, let’s admit that PHP comes with a lot of baggage. For a long time, “real” programmers shunned PHP because it was born as a language cobbled together to do simple web development but not much more. Its ease of use, combined with the fact that it was easy to deploy on commodity web hosting, meant you could find PHP talent for relatively cheap to build your applications. The stereotype was that PHP developers relied on a lot of patchy copy-and-paste solutions to build shoddy and insecure websites.

A LOT has happened since then. Here’s what I’ve encountered so far, diving back into PHP:

Object-orientation: PHP has had objects for a long time, but more recent features like namespaces, traits, and class autoloading have made newer PHP projects very strongly object-oriented. You can even find books on design patterns for PHP.

To me, this is the single most important positive change to the PHP world. The culture has changed from an ad hoc procedural mindset to more sophisticated thinking about coding for large-scale architectures.

Frameworks: Several major MVC frameworks exist, many of them drawing inspiration from Rails.

Performance: As of 5.5, PHP has a built-in opcode cache, making it much more performant. An alternative to core PHP is the HHVM project, backed by Facebook, which is a high-performance PHP implementation. HHVM has had a “rising tide” effect: the forthcoming PHP7 is supposed to be as fast as HHVM. So whatever you use, you can expect good performance at scale.

Tooling: There is sophisticated tooling like composer and a vibrant ecosystem of packages. While you can still deploy PHP applications the old way, using Apache and mod_php, there is a mature FastCGI Process Manager (PHP-FPM) engine that isolates PHP processes from the web server. PHP-FPM allows Apache/nginx/whatever web server to handle static content while a pool of processes handles PHP requests. This results in much more efficient memory usage and better performance.
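
To give a flavor of the tooling, here’s roughly what pulling a dependency into a project with Composer looks like on the command line (the monolog package is just an arbitrary example):

composer init                      # interactively create a composer.json for the project
composer require monolog/monolog   # declare a dependency and install it under vendor/
composer install                   # install everything pinned in composer.json / composer.lock
composer dump-autoload             # regenerate the class autoloader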

Success: Many respectable, high-profile products have been built using PHP: WordPress, Drupal, and Facebook, just to name a few.

But all this is just to state a bunch of known facts. To me, the biggest surprise has been in the EXPERIENCE of beginning to write code again in PHP and using Laravel: what does that FEEL like?

In a word, it feels like Java, minus the strong typing. This is an entirely good thing in my opinion, despite criticisms that PHP technologies have become too complex and overdesigned.

The biggest paradigm difference between PHP and other popular web application back-ends is that nothing remains loaded in memory between requests. It’s true that opcode caching means PHP doesn’t have to re-parse PHP source code files to opcodes every time, which speeds things up greatly, but the opcodes still need to be executed for each request, from bootstrapping the application to finishing the HTTP response. In practice, this doesn’t actually matter, but it’s such a significant under-the-hood difference from, say, Django and Rails, that I find myself thinking about it from time to time.

It’s reassuring that when I scour the interwebs researching something PHP-related, I’m finding a lot of informed technical discussions and smart people who have come to PHP from other languages and backgrounds. It bodes well for the strengths and the future of the technology.

On Magic

Kids, I hate to break it to you, but there is no such thing as magic.

The cool whizzy stuff on your screen that impresses you: that’s the result of work. The button that was broken yesterday, that now works correctly today: also the product of work. The screen that was discussed in a meeting last week that suddenly appeared today on the development server: yup, work. When you look for a feature in the web application and it isn’t there, there’s this thing that can create it and put it there: it’s called work.

Someday we’ll all get over the mystifying aura of technology. Someday people will learn to recognize that programmers are not magicians, just workers, and that the work they do involves mundane, non-magical tasks, like wrestling with code libraries and frameworks to get them to do what we want, reorganizing files to make sure stuff exists in sensible places, and figuring out what to do when changing one piece affects three other pieces in unexpected ways.

And this means, someday, people will understand that, like any other kind of work, software development takes resources (namely, time!), not a magic wand. And no amount of “ambition” (read: wishful thinking) can really change that basic equation. You can pretend magic exists, but that doesn’t make it so. You aren’t fooling anyone. You just look childish.

When software development is recognized as work, there can be clarity about what is possible with a given set of resources. Then tasks can be sanely identified, specified, prioritized, coordinated, scheduled, executed, completed.

And then some really cool things can happen. Not magical things, but really cool things. Great things, even. The kind of great things that result from understanding, dedication, and hard work.

Adventures in Docker


After you’ve taken the time to puzzle through what it is exactly, Docker is nothing short of life changing.

In a nutshell, Docker lets you run programs in separate “containers”, isolating the dependencies each one requires. This is similar to virtualization solutions like VMware and VirtualBox, but Docker is a much more fine-grained, customizable tool that operates at a different level.

It took me a week of experimentation to develop a firm grasp of the Docker concepts and how all its pieces work together in practice. I’m now using it at work for development, and I hope to be setting up a configuration for staging soon.

This is a short write-up of what I’ve learned so far.

The Concepts

At first glance, almost everyone (including me) mistakes Docker for another virtualization technology. It’s not. Virtualization lets you run a machine within a machine. A Docker container is more subtle: it segments or isolates part of your existing Linux operating system.

A container consists of a separate memory space and filesystem (taken from something called an image). A container actually uses the same kernel as your “host” system. Some fancy Linux kernel features (namespaces and cgroups, mainly) allow all of this to happen; there is no hardware virtualization going on.

You start using Docker by creating an image or using an existing one made by someone else. An image is a filesystem snapshot. You can build images in an automated fashion using a Dockerfile, which allows you to “script” the running of commands to do things like install software and copy files around.
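
As a rough sketch (the image name and the installed package here are just placeholders), building and running an image looks something like this:

mkdir myapp && cd myapp

# a tiny Dockerfile: start from a base image, install something, set a default command
cat > Dockerfile <<'EOF'
FROM debian:jessie
RUN apt-get update && apt-get install -y curl
CMD ["bash"]
EOF

docker build -t myapp .     # bake the Dockerfile into an image named "myapp"
docker run -it --rm myapp   # start a throwaway container from that image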

When you launch a container, Docker makes a “copy” of an image (well, not really, but we’ll pretend for now) and uses that to start a process. The fact that the filesystem is a “copy” is important: if you launch 2 containers using the same image, and the processes in each modify files, those changes happen in each CONTAINER’s filesystem; the image doesn’t change. Processes inside containers only see what’s in the container, so they are isolated from one another. This allows complete dependency separation, at the level of processes.

You can do a lot with containers. You can run multiple processes in them (though this is discouraged). You can start another process in an already running container, so it can interact with the already running process. After a container has stopped (by halting the process running in it), you can start it back up again. Again, any file changes made are in the container’s filesystem; the image remains unchanged.
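
In command form, that lifecycle looks roughly like this (the names are made up):

docker run -d --name web myapp   # launch a background container from the "myapp" image
docker exec -it web bash         # start a second process (here, a shell) inside the running container
docker stop web                  # halt the container's main process
docker start web                 # start the same container again; its filesystem changes are still there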

The Issues

There are three issues I’ve personally encountered so far in using Docker:

1) Persistent Storage

Containers are meant to be transient. In a well designed setup, you should, theoretically, be able to spin up new containers and discard old ones all the time, and not lose any data. This means that your persistent storage has to live somewhere else, in one of two places: a special “data container” or a directory mounted from the host filesystem.

Data containers were too complicated and weird, and I couldn’t get them to work the way I expected, so I mounted directories instead. This has the nice side effect that, as Dockerized processes change files, you can see those changes immediately from the host without having to do anything special to access them. I’m not sure, however, what “best practices” are concerning storage.
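
Mounting a host directory is just a flag on docker run (the paths here are only examples):

# whatever the container writes under /var/lib/mysql lands in /srv/mysql-data on the host,
# so the data survives even if this particular container is thrown away
docker run -d --name db -v /srv/mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql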

2) Multi-Container Applications

Many modern applications consist of several processes or require other applications. For example, my project at work consists of a Rails web app, a delayed_job worker process, an Apache Solr instance, and a MySQL database.

Since Docker strongly recommends a one-process-per-container configuration, you need a way to coordinate a set of running processes and make sure they can communicate with one another. Docker Compose does this, allowing you to easily specify whether containers should be able to open connections to each other’s network ports, share filesystems, etc.
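
A minimal, hypothetical docker-compose.yml for a Rails app and its database might look something like this; once it exists, the whole set is started with one command:

cat > docker-compose.yml <<'EOF'
web:
  build: .
  ports:
    - "3000:3000"
  links:
    - db
db:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=secret
EOF

docker-compose up -d   # build/pull images as needed and start both containers
docker-compose ps      # see what's running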

Currently, Docker Compose is not yet considered “production ready.” While it addresses the need to orchestrate processes, there is also the problem of monitoring and restarting processes as needed. It’s not clear to me yet what the best tool is for doing this (it may even be a completely orthogonal concern to what Docker does).

3) Running Commands

Sometimes you need to use the Rails CLI to do things like run database migrations or a rake task. Running commands takes a bit of extra effort, since they need to happen in a container. A slight complication is whether to run the command in the existing Rails container or to start another one entirely for that new process. It’s a bit of typing, which is annoying.
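
Concretely, the choice is between something like these two (the container and image names are placeholders):

# run the migration inside the already-running Rails container...
docker exec -it web bundle exec rake db:migrate

# ...or spin up a brand-new, throwaway container from the same image just for this one command
docker run --rm myapp bundle exec rake db:migrate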

The Payoffs

How is Docker life changing? There are several scenarios I encounter ALL THE TIME that Docker helps tremendously with.

* On the same machine, you can run several web applications, which each require different versions of, say, Ruby, Rails, MySQL and Apache, without dealing with software conflicts and incompatibilities.

* Related to the previous point, Docker lets you more easily experiment with different software without polluting your base system with a lot of crap you might never use again.

* There is MUCH less overhead and less wasted memory usage than with virtualization. If you allocate 1GB of RAM to a virtual machine but only use 512MB, the other half goes to waste. Docker containers only use as much memory as the processes themselves take up, plus a small bit of overhead. Since Docker uses unionfs to “copy” (really, overlay) images to container filesystems, the actual disk space used isn’t as much as you might think.

* Since Docker containers are entirely self-contained, they can be deployed in development, staging, and production environments with almost NO differences among them, thereby minimizing the problems that usually arise.

For me, a lot of the benefits boil down to this: virtualization is amazing, but in practice, I don’t use it that much because it’s too heavyweight and too coarse for my needs. A VirtualBox VM is a whole other machine that I need to think about. By working at the level of Linux processes, Docker is exactly the right kind of tool for managing application dependencies.

A cautionary note: there’s a lot of buzz right now around containers, including efforts at defining vendor-neutral standards, such as appc. Although Docker releases have been rapid and it is already seeing a lot of adoption, it feels bleeding edge to me. It’s exciting but in a few years, it’s entirely possible that another container solution might surpass it to become the de facto standard. The playing field is just too new, which means Docker comes with some risk. But it’s well worth exploring at this early stage, even if only to get a taste of the new ideas shaping systems configuration and deployment that are definitely here to stay.

Software Old and New

Waaaay back in middle school, I used WordPerfect 5.1 to type up book reports and other homework assignments. This was on a Tandy 1000, one of the first home computers. Having never used a PC before, much less word processing software, it took some time to learn. WordPerfect came with a plastic template you laid above the keyboard’s F-keys, which told you what pressing various key combinations did. In my ignorance, I hit Enter twice at the end of every line of text to get double line spacing, which, of course, made editing and revising a nightmare. My uncle, a computer wiz, laughed when he saw this, and taught me how to set the line spacing the right way. It amazed me that the computer could reflow the text automatically.

A lesson I learned from this was that the manual that came with the 3.5″ disks was pretty darn useful.

Back then, in the late 80s and early 90s, software was a specialized tool or instrument. I was fortunate to have a computer at home. Not everyone did. To use it proficiently, you had to do some learning. This was expected. It wasn’t WordPerfect’s fault that I didn’t even know line spacing existed as a feature. Like learning any powerful tool, it required some time and effort to develop the skills.

There’s been a drastic paradigm shift over the last 25 years. Software has become ubiquitous. It’s no longer just the programs you run on your home or work computer. It’s on our phones and tablets. It’s what web applications are made of. It’s in cars, ATMs, information kiosks, and home appliances. Commercial software rarely comes with user manuals anymore. My smartphone came with a single sheet of paper showing you how to turn it on. When there are Android updates, I don’t get a book that explains the additional gestures it now recognizes, what the new icons mean, or how the menus have been restructured. I’m expected to just poke around the new interface until I can do what I’m trying to do. When you visit a new website you haven’t been to before, you are similarly expected to already know how to navigate it. This is possible because there are common conventions around software features and interface design, so that, when using a new piece of software, you are not starting completely from scratch.

The consequence of this radical shift is that if you can’t immediately use a new piece of software, there are 2 possible explanations: 1) you are lacking a general “digital literacy” which most people are understood to have (as opposed to specialized knowledge), or 2) the software is crappy.

We take pity on digital illiterates, but we have no sympathy or patience for crappy software. “Why does it take me 3 clicks to get to X? Why doesn’t this application do Y? Why doesn’t the icon resemble this, instead of that?” These complaints are commonplace. Increasingly, it doesn’t seem to matter what the software actually does or what the level of its inherent complexity might be. The pace of technological change and the pressures of high-tech business have made it important for users to be able to use software immediately, and to be satisfied enough that they don’t run off to a competitor’s product. Our intolerance is a direct result of this frenzied climate, which has taken user-friendliness to the extreme of trying to be all things to all people (or at least, as many things to as many people as possible).

The problem is that there is a lot of variability in user preferences, opinions, and needs. The more that software tries to accommodate a wide variety of these concerns, the less useful it becomes as a tool. I think you see this especially in many mobile apps and websites. They DO very little, but they go out of their way to make it easy to do it. This focus on ease is deceptive. It leads to a false sense of empowerment. We are surrounded by software everywhere that appears to enable us to do all sorts of things, but we actually don’t understand enough to know how to operate things skillfully. We just click and swipe, click and swipe, and get frustrated when magic doesn’t happen.

Using technology as a tool can save significant work and allow us to do things not possible before. But that doesn’t necessarily imply that it is or should be easy. It’s a subtle but important difference. Knowing how to fly an airplane enables you to traverse thousands of miles in a few hours, but that doesn’t mean operating one is easy, or that it should be. One should be trained to be a skilled pilot, so that she can make the machine do all the complex things it needs to, in a variety of situations. One shouldn’t expect a cockpit that lets just anyone marginally fly a plane. Because how far is that going to get you, really?

Where To Find Info When Packages Break in Debian Testing

The chromium package in Debian testing broke a few days ago. After I ran “apt-get update” and “apt-get upgrade”, chromium disappeared from my Xfce menu, and the executable was gone from my system. Nothing like that has ever happened to me before. Odd!

When I tried to re-install it by running “apt-get install chromium”, I got the following error:

The following packages have unmet dependencies:
chromium : Depends: libudev0 (>= 146) but it is not installable
E: Unable to correct problems, you have held broken packages.

Indeed, there is no package called libudev0 (there is, however, a libudev1, which I already had installed). Mysterious.
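
For what it’s worth, apt-cache is a quick way to confirm this sort of thing:

apt-cache search libudev    # shows which libudev packages actually exist in your sources
apt-cache policy libudev1   # shows which versions are available and which is installed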

Being fairly new to Debian testing, I was at a loss as to what to do. After some googling, I discovered some information that’s useful to users trying to troubleshoot broken packages.

I already knew that Debian has a searchable package database on their website. If you search for ‘chromium’ in the testing distribution, you’ll get to a page for it.

What I’d never noticed before were the links on the right-hand side. Every package apparently has its own mailing list archive and QA page.

The QA page isn’t the easiest thing in the world to make sense of. I couldn’t find a simple listing of bugs in reverse chronological order, which would let me quickly see the newest bugs filed. The closest thing is the list of all open bugs. There is also a dashboard page which is vaguely reverse chronological, though it may be sorted by priority; it’s not clear.

In any event, this was good enough. I could see the bug for the error message I was getting. Turns out an update had mistakenly built the package for stable, which is why the unmet dependency was coming up.

It’s yet to be fixed, but at least now I know exactly what the problem is.

What Django Can (and Can’t) Do for You

I’m joining a team at work for the next few weeks to hammer out what will be my second significant Django project.

I’m not an expert on Django, but I have enough experience with it now to say that it facilitates the vast majority of web application programming tasks with very little pain. It’s highly opinionated and very complex, and has all the issues that come with that, but if you learn its philosophy, it serves you extremely well. And in cases where it doesn’t—say, with database queries that can’t be written easily with the Django ORM—you can selectively bypass parts of the framework and just do it yourself.

So I’ve been puzzled by complaints I’ve been hearing about how difficult it is to work with Django. There’s an initial learning curve, sure, but I didn’t think it was THAT bad. Yet over and over again, I kept hearing the grumbling, “why do I have to do it this way?”

A recent example came up with the way that Django does model inheritance. There are a few ways to do it, with important differences in how the database tables are organized. You have to understand this in order to make a good choice, so of course, it takes a little time to research.

Having worked with Java’s Hibernate, I recognized some of the similarities in Django’s approach to addressing the fundamental problem of the impedance mismatch between object classes and database tables. Every ORM must deal with this, and there are nicer and less nice ways to deal with it. The point is, there’s no way to avoid it.

I realized that the complaints weren’t actually about Django per se, despite appearances. They were complaints about not understanding the underlying technologies. People were expecting Django to replace the need for knowledge about databases, HTTP, HTML, MVC architecture, etc. It doesn’t. That’s a poor way to approach what Django can do for you.

The metaphor of tools is useful here. If you gave me a set of sophisticated, high-quality tools to build a house, but I didn’t know the first thing about construction, I might complain that the tools suck because I can’t easily accomplish what I want, because I’m forced to use them in (what seems to me to be) clumsy ways. But that would be terribly misguided.

So the complaints weren’t about the merits of Django compared to how other frameworks do things. What they’re really saying is, “This is different from how I would do it, if I were writing a web framework from scratch.” Which is funny, because I’m not convinced they could invent a better wheel, given their limited knowledge and experience. (This is not a dig: I’ve worked on quite a few projects, many with custom frameworks, and doubt I could conceive of something easier to use and still as powerful as Django. Designing frameworks is hard.) Sometimes the complaints are thinly veiled anti-framework rants, which is fine, I suppose, if you prefer the good old days of 1998. But God help you if you try to create anything really complicated or useful.

Goodbye Ubuntu, Hello Debian Testing

This past weekend, I finally made the switch: I replaced Ubuntu with Debian testing on my main computer.

I really dislike the direction that Ubuntu has been taking lately. Don’t get me wrong: from a technical standpoint, Ubuntu is a great distro, the first and only Linux I’ve used where every single thing Just Worked after installation (I’ve run Slackware and Debian in the past, and maybe one or two others I can’t remember just now). I liked that its releases did a good job of including very recent versions of software. Without a doubt, Ubuntu has done a LOT to put Linux within reach of a wider user base.

But it’s come at a cost. Ubuntu 12.04, which is what I used to run, has spyware. (Here’s a good page with instructions on how to remove it, as well as make other tweaks.) Even if you like Unity, it’s a huge resource hog. And it annoyed me the way Ubuntu’s app store was so similar to the package manager: it seemed designed to lure people into the app store unnecessarily. The shopping results in Dash and privacy concerns were the straws that broke the camel’s back.

I get that Canonical is a business whose ultimate goal is to make money. I wonder if a subscription fee model would have worked for them. I would have gladly paid a reasonable amount to get a quality, user-friendly, up-to-date distro.

So yeah, I’m now running Debian testing on my Toshiba Portege R835 laptop. I chose Debian testing mostly because a lot of packages in stable are a bit too old for my tastes. stable is a great choice for the server, but for my everyday machine, I wanted the latest and greatest, or the closest thing to it that’s still fairly dependable. Debian testing fit the bill.

The install process is not as easy as Ubuntu’s, but it was fairly painless and seems much improved from years ago. A few notes on what I did:

  • Since I wanted “testing”, I used the latest daily snapshot of the Debian Installer.
  • On the first screen, I chose the advanced options to select Xfce as my desktop, so I wouldn’t have to uninstall GNOME later and install Xfce manually.
  • When the install process finished and I rebooted, my wireless didn’t work. The wireless device in my laptop is an “Intel(R) Centrino(R) Wireless-N”, which requires an additional package with firmware to be installed. Run “apt-get install firmware-iwlwifi” as root to get it, and reboot.
  • I changed my /etc/apt/sources.list file to use “testing” instead of “jessie” so that I would always be tracking the rolling testing release.
  • Getting Flash to work in the browser requires adding the “contrib” and “non-free” sections to the apt sources, and installing the “flashplugin-nonfree” package. (A rough consolidation of these last few steps follows this list.)
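
For reference, here’s what those last few steps boil down to (the mirror in the sources line is just an example):

# /etc/apt/sources.list should track "testing" and include the contrib and non-free sections, e.g.:
#   deb http://ftp.us.debian.org/debian/ testing main contrib non-free

apt-get update
apt-get install firmware-iwlwifi      # firmware for the Intel Centrino wireless card
apt-get install flashplugin-nonfree   # Flash plugin for the browser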

That’s it! Suspending my laptop works just fine, and connecting USB drives and devices works without any additional setup (which was not the case the last time I used Debian many years ago!). So far, all my applications have been working seamlessly with the old data I copied over.

I like having the peace of mind that Debian would never install spyware or intentionally compromise users’ privacy. Yes, it was just a bit more work to install, and getting non-free software that I unfortunately need to use for work is a bit of a hassle, and there will probably be small configuration annoyances in the future that make it less “magical” than Ubuntu. But I’m willing to deal with that.

I hope to replace Ubuntu with Debian testing on my desktop machine at work too sometime in the next few weeks. So long, Ubuntu, it’s been nice.

Goodbye, Sublime Text?

When one of my coworkers started using Sublime Text about a year ago, I was intrigued. I played with it and found it to be a very featureful and speedy editor. I wasn’t compelled enough to make the switch from Emacs, though. (You’ll pry it from my cold dead hands!) But I really liked the fact that you could write plugins for it in Python.

So for fun, I gradually ported my Emacs library, which integrates with a bunch of custom development tools at work, to Sublime Text. It works very well, and the ST users in the office have been happy with it. Although I don’t actually use ST regularly, I’ve since been following news about its development.

What I discovered is that many of its users are unhappy with the price tag and dissatisfied with the support they received via the forums. So much so, in fact, that there’s now an attempt to create an open source clone by reverse engineering it. The project is named lime.

I learned about this with very mixed feelings. There’s a good chance the project will take off, given how much frustration exists with ST. Of course, the trend is nothing new: open source software has been supplanting closed source commercial software for a long time now. But this isn’t Microsoft or Oracle we’re talking about; it’s a very small company, charging what I think is a reasonable amount of money for their product. While they undoubtedly could do more to make their users happier, they probably can’t do so without hurting what I imagine are pretty slim profit margins. That, or never sleeping again.

It’s not news that making a software product is much less viable than it used to be. Where money is made, it’s increasingly through consulting and customization, but one wonders about the size of that market.

It’s generally a good thing that open source has “socialized” software development: technology has enabled communities of programmers to contribute and collaborate on a large scale, in a highly distributed fashion, to create good quality software available to all, taking it out of the profit equation. The problem is that the rest of the economy hasn’t caught up with this new kind of economics.

I don’t mean to sound dramatic: there are many jobs out there for programmers, of course. But it saddens me that if you want to try to create a product to sell, it’s simply not enough to have a good idea anymore, in this day and age. It has to be dirt cheap or free, you have to respond to every message immediately, and respect every single feature request. Between the open source world and the big software companies that service corporate customers, there is a vast middle ground of small companies that is quickly vanishing.

Making Emacs an IDE

It’s that time when bloggers wax introspective about the past year. For me, the major personal revelation of 2011 was re-discovering something very old and putting it to new use: 2011 was the year of the Emacs IDE.

I’ve been using Emacs, on and off, for close to a decade now. What’s changed is that, in the past few months, I’ve been writing extensions for it. It started with a simple desire to better navigate files in a complex directory hierarchy that followed specific and somewhat convoluted conventions. At first, learning Emacs Lisp was simply a means to an end, but I ended up liking it so much that I started exploring Common Lisp (and more recently, Clojure, since I’ve worked with Java in the past).

What started as a small task has become a larger project of turning Emacs into an IDE.

To understand this, one needs to know some context about the system I work with. We developers edit tons of XML files and files in other text formats, which all drive our proprietary web application product. We have many command line tools that manipulate these files in various ways; the system was originally designed by folks who followed the UNIX philosophy of building orthogonal tools and chaining them together.

There are pros and cons to this system; for reasons I won’t get into, I don’t love it, but it’s what we work with right now. When I started the job, the vast majority of the developers used screen, vi, and the shell prompt. Typical workflows that involved working with only a few files could be extremely hard to keep track of, and usually required a lot of copying and pasting between screen and/or ssh sessions. Few people seemed to mind, but I found the workflow to contain too much extraneous cognitive load, and the state of the tools made development very prone to error.

Gradually, I’ve been integrating our tools into Emacs. Sometimes that simply means binding a key combination to running a diagnostics program and storing the output in a buffer. Sometimes it means accumulating that output for history’s sake. Sometimes it means parsing program output, processing it in Emacs Lisp, and inserting a result into the current buffer. Sometimes it means running external programs, even GUI applications, and tweaking them a bit to tell Emacs to open specific files you want to look at.

The productivity gains have been amazing. This is no reason to brag: managing several screen sessions with different vi and shell instances wasn’t exactly hard to improve upon. But Emacs made it fairly painless. Emacs Lisp has proved to be wonderful “glue” for integrating existing tools in our environment.

Writing tools that enable you to do other job tasks better is a really interesting experience; I’ve never done it to such an extensive degree. So far, one other person in my group is using this Emacs IDE, and she has been happy with how much it facilitates her work. Others who swing by my desk for something often watch me work for a moment, and ask, “how did you do that?! that’s not vi, is it?”

Getting more people to switch over means convincing them that the steep learning curve of Emacs is worth the gains of the integration I’ve done. I’m not sure how much that will happen, since a big part of it is cultural. But if there aren’t any more converts, I don’t really care. The best thing about this ongoing project is that I am the end user. The software I wrote is really for myself. It is something I use intensively every single day. And that makes it all the more gratifying.