Adventures in Docker

After you’ve taken the time to puzzle through what exactly it is, Docker is nothing short of life-changing.

In a nutshell, Docker lets you run programs in separate “containers”, isolating the dependencies each one requires. This is similar to virtualization solutions like VMware and VirtualBox, but Docker is a much more fine-grained, customizable tool that operates at a different level.

It took me a week of experimentation to develop a firm grasp of the Docker concepts and how all its pieces work together in practice. I’m now using it at work for development, and I hope to be setting up a configuration for staging soon.

This is a short write-up of what I’ve learned so far.

The Concepts

At first glance, almost everyone (including me) mistakes Docker for another virtualization technology. It’s not. Virtualization lets you run a machine within a machine. A Docker container is more subtle: it segments, or isolates, part of your existing Linux operating system.

A container consists of a separate memory space and filesystem (taken from something called an image). A container actually uses the same kernel as your “host” system. Linux kernel features like namespaces and cgroups make all of this possible; there is no hardware virtualization going on.

You start using Docker by creating an image or using an existing one made by someone else. An image is a filesystem snapshot. You can build images in an automated fashion using a Dockerfile, which allows you to “script” the running of commands to do things like install software and copy files around.
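
For example, a Dockerfile for a hypothetical Rails app might look something like this (the base image tag, paths, and commands here are illustrative, not a recipe from any particular project):

    FROM ruby:2.2
    # Install gems first, so Docker can cache this layer between builds
    WORKDIR /app
    COPY Gemfile Gemfile.lock /app/
    RUN bundle install
    # Then copy in the application code itself
    COPY . /app
    CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]

Running “docker build -t myapp .” against this produces an image you can launch containers from.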

When you launch a container, Docker makes a “copy” of an image (well, not really, but we’ll pretend for now) and uses that to start a process. The fact that the filesystem is a “copy” is important: if you launch two containers using the same image, and the processes in each modify files, those changes happen in each CONTAINER’s filesystem; the image doesn’t change. Processes inside containers only see what’s in the container, so they are isolated from one another. This allows complete dependency separation at the level of processes.

You can do a lot with containers. You can run multiple processes in them (though this is discouraged). You can start another process in an already running container, so it can interact with the already running process. After a container has stopped (by halting the process running in it), you can start it back up again. Again, any file changes made are in the container’s filesystem; the image remains unchanged.
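
A quick shell session makes this lifecycle concrete (the container name here is made up):

    # Launch a long-running process in a container named "demo"
    docker run -d --name demo ubuntu sleep 3600

    # Start a second process inside the same running container
    docker exec demo touch /tmp/hello

    # Stop the container, then start it back up again
    docker stop demo
    docker start demo

    # /tmp/hello survives in the container's filesystem,
    # but the ubuntu image itself is unchanged
    docker exec demo ls /tmp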

Issues

There are three issues I’ve personally encountered so far in using Docker:

1) Persistent Storage

Containers are meant to be transient. In a well-designed setup, you should, theoretically, be able to spin up new containers and discard old ones all the time without losing any data. This means that your persistent storage has to live somewhere else, in one of two places: a special “data container” or a directory mounted from the host filesystem.

Data containers were too complicated and weird, and I couldn’t get them to work the way I expected, so I mounted directories instead. This has the nice side effect that, as Dockerized processes change files, you can see those changes immediately from the host without having to do anything special to access them. I’m not sure, however, what “best practices” are concerning storage.
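
Mounting a host directory is just a flag on docker run; the shape of it looks like this (the paths and names are hypothetical):

    # Map the host's /home/jeff/appdata to /var/data inside the container;
    # files written there outlive any individual container
    docker run -d --name app -v /home/jeff/appdata:/var/data myimage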

2) Multi-Container Applications

Many modern applications consist of several processes or require other applications. For example, my project at work consists of a Rails web app, a delayed_job worker process, an Apache Solr instance, and a MySQL database.

Since Docker strongly recommends a one-process-per-container configuration, you need a way to coordinate a set of running processes and make sure they can communicate with one another. Docker Compose does this, allowing you to easily specify whether containers should be able to open connections to each other’s network ports, share filesystems, etc.
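
A Compose file for a stack like the one above might be sketched as follows (the service names and images are illustrative, and the exact file format has been evolving across Compose releases):

    web:
      build: .
      ports:
        - "3000:3000"
      links:
        - db
    db:
      image: mysql:5.6
      environment:
        MYSQL_ROOT_PASSWORD: secret

A single “docker-compose up” then brings up the whole set of containers together.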

Docker Compose is not yet considered “production ready.” While it addresses the need to orchestrate processes, there is also the problem of monitoring and restarting processes as needed. It’s not clear to me yet what the best tool for that is (it may even be a concern completely orthogonal to what Docker does).

3) Running Commands

Sometimes you need to use the Rails CLI to do things like run database migrations or a rake task. Running commands takes a bit of extra effort, since they need to happen in a container. A slight complication is whether to run the command in the existing Rails container or to start another one entirely for that new process. It’s a bit of typing, which is annoying.
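
In practice, that means prefixing every command with a docker invocation, along these lines (the container and image names are made up):

    # Run a migration inside the already-running Rails container...
    docker exec -it myapp_web bundle exec rake db:migrate

    # ...or spin up a fresh, throwaway container just for the one command
    docker run --rm myapp bundle exec rake db:migrate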

The Payoffs

How is Docker life-changing? There are several scenarios I encounter ALL THE TIME that Docker helps tremendously with.

* On the same machine, you can run several web applications, each requiring different versions of, say, Ruby, Rails, MySQL, and Apache, without dealing with software conflicts and incompatibilities.

* Related to the previous point, Docker lets you more easily experiment with different software without polluting your base system with a lot of crap you might never use again.

* There is MUCH less overhead and wasted memory than with virtualization. If you allocate 1GB of RAM to a virtual machine but only use 512MB, the other half goes to waste. Docker containers use only as much memory as the processes themselves take up, plus a small bit of overhead. And since Docker uses a union filesystem to “copy” (really, overlay) images into container filesystems, the actual disk space used isn’t as much as you might think.

* Since Docker containers are entirely self-contained, they can be deployed in development, staging, and production environments with almost NO differences among them, thereby minimizing the problems that usually arise.

For me, a lot of the benefits boil down to this: virtualization is amazing, but in practice, I don’t use it that much because it’s too heavyweight and too coarse for my needs. A VirtualBox VM is a whole other machine that I need to think about. By working at the level of Linux processes, Docker is exactly the right kind of tool for managing application dependencies.

A cautionary note: there’s a lot of buzz right now around containers, including efforts at defining vendor-neutral standards, such as appc. Although Docker releases have been rapid and it is already seeing a lot of adoption, it feels bleeding edge to me. It’s exciting, but in a few years it’s entirely possible that another container solution will surpass it to become the de facto standard. The playing field is just too new, which means Docker comes with some risk. But it’s well worth exploring at this early stage, even if only to get a taste of the new ideas shaping systems configuration and deployment, ideas that are definitely here to stay.

A VIAF Reconciliation Service for OpenRefine

OpenRefine is a wonderful tool my coworkers have been using to clean data for my project at work. Our workflow has been nice and simple: they take a CSV dump from a database, transform the data in OpenRefine, and export it as CSV. I write scripts to detect the changes and update the database with the new data.

In the next few months, we’ll need to reconcile the names of various individuals and organizations with standard “universal” identifiers for them in the Virtual International Authority File (VIAF). The tricky part is that any given name in our system might have several candidates in VIAF, so the process can’t be fully automated. A human being needs to look at the candidates and make a decision. OpenRefine supports exactly this kind of reconciliation, and provides an interface that lets you choose among candidates.

Communicating with VIAF is not built in, though. Roderic D. M. Page wrote a VIAF reconciliation service, and it’s publicly accessible at the address listed on the linked page (the PHP source code is available here). It works very nicely.
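
The reconciliation API itself is pleasantly simple: OpenRefine sends the service a batch of JSON queries and expects ranked candidates back. Roughly, and with invented values (the field names come from the standard reconciliation protocol), a request carries one or more queries, keyed q0, q1, and so on:

    {"q0": {"query": "Austen, Jane", "type": "/people/person"}}

and the response holds ranked candidates for each query, which OpenRefine then presents to the user:

    {"q0": {"result": [{"id": "102333412",
                        "name": "Austen, Jane, 1775-1817",
                        "type": [{"id": "/people/person", "name": "Person"}],
                        "score": 0.95,
                        "match": false}]}}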

I wanted to write my own version for two reasons: 1) I needed it to support the different name types in VIAF, and 2) I wanted to host it myself, in case I needed to make large numbers of queries, so as not to be an obnoxious burden on Page’s server.

The project is called refine_viaf and the source code is available at https://github.com/codeforkjeff/refine_viaf.

For those who just want to use it without hosting their own installation, I’ve also made the service publicly accessible at http://refine.codefork.com, where there are instructions on how to configure OpenRefine to use it.

What Django Can (and Can’t) Do for You

I’m joining a team at work for the next few weeks to hammer out what will be my second significant Django project.

I’m not an expert on Django, but I have enough experience with it now to say that it facilitates the vast majority of web application programming tasks with very little pain. It’s highly opinionated and very complex, and has all the issues that come with that, but if you learn its philosophy, it serves you extremely well. And in cases where it doesn’t—say, with database queries that can’t be written easily with the Django ORM—you can selectively bypass parts of the framework and just do it yourself.
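
For instance, here’s a minimal sketch of the two usual escape hatches (the Book model and the SQL are hypothetical):

    from django.db import connection
    from myapp.models import Book  # a hypothetical model

    # Hand-written SQL, mapped back onto Book instances by the ORM
    books = Book.objects.raw(
        "SELECT * FROM myapp_book WHERE price > %s", [50])

    # Or skip the ORM entirely and talk to the database cursor
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT author_id, COUNT(*) FROM myapp_book GROUP BY author_id")
        rows = cursor.fetchall()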

So I’ve been puzzled by the complaints I keep hearing about how difficult it is to work with Django. There’s an initial learning curve, sure, but I didn’t think it was THAT bad. Yet over and over again, I kept hearing the grumbling, “why do I have to do it this way?”

A recent example came up with the way Django does model inheritance. There are a few ways to do it, with important differences in how the database tables are organized. You have to understand those differences to make a good choice, so of course it takes a little time to research.
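
The two main options look almost identical in Python but produce very different schemas; here’s a sketch with invented models:

    from django.db import models

    # Abstract base: no "person" table is created; each subclass
    # gets its own complete table, including a name column
    class Person(models.Model):
        name = models.CharField(max_length=100)

        class Meta:
            abstract = True

    class Student(Person):
        school = models.CharField(max_length=100)

    # Multi-table: the concrete base gets its own table, and each
    # subclass table links back to it via an implicit one-to-one key
    class Organization(models.Model):
        name = models.CharField(max_length=100)

    class University(Organization):
        city = models.CharField(max_length=100)

Abstract inheritance keeps queries simple; multi-table inheritance lets you query the base table across all subclasses, at the cost of a join.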

Having worked with Java’s Hibernate, I recognized some of the similarities in Django’s approach to addressing the fundamental problem of the impedance mismatch between object classes and database tables. Every ORM must deal with this, and there are nicer and less nice ways to deal with it. The point is, there’s no way to avoid it.

I realized that the complaints weren’t actually about Django per se, despite appearances. They were complaints about not understanding the underlying technologies. People were expecting Django to replace the need for knowledge about databases, HTTP, HTML, MVC architecture, etc. It doesn’t. That’s a poor way to approach what Django can do for you.

The metaphor of tools is useful here. If you gave me a set of sophisticated, high-quality tools to build a house, but I didn’t know the first thing about construction, I might complain that the tools suck because I can’t easily accomplish what I want, because I’m forced to use them in (what seems to me to be) clumsy ways. But that would be terribly misguided.

So the complaints weren’t about the merits of Django compared to how other frameworks do things. What they’re really saying is, “This is different from how I would do it, if I were writing a web framework from scratch.” Which is funny, because I’m not convinced they could invent a better wheel, given their limited knowledge and experience. (This is not a dig: I’ve worked on quite a few projects, many with custom frameworks, and doubt I could conceive of something easier to use and still as powerful as Django. Designing frameworks is hard.) Sometimes the complaints are thinly veiled anti-framework rants, which is fine, I suppose, if you prefer the good old days of 1998. But God help you if you try to create anything really complicated or useful.

Goodbye, Sublime Text?

When one of my coworkers started using Sublime Text about a year ago, I was intrigued. I played with it and found it to be a very featureful and speedy editor. I wasn’t compelled enough to make the switch from Emacs, though. (You’ll pry it from my cold dead hands!) But I really liked the fact that you could write plugins for it in Python.

So for fun, I gradually ported my Emacs library, which integrates with a bunch of custom development tools at work, to Sublime Text. It works very well, and the ST users in the office have been happy with it. Although I don’t actually use ST regularly, I’ve since been following news about its development.

What I discovered is that many of its users are unhappy with the price tag and dissatisfied with the support they received via the forums. So much so, in fact, that there’s now an attempt to create an open source clone by reverse engineering it. The project is named lime.

I learned about this with very mixed feelings. There’s a good chance the project will take off, given how much frustration exists with ST. Of course, the trend is nothing new: open source software has been supplanting closed source commercial software for a long time now. But this isn’t Microsoft or Oracle we’re talking about; it’s a very small company, charging what I think is a reasonable amount of money for their product. While they could undoubtedly do more to make their users happier, I imagine they can’t do so without hurting what are probably pretty slim profit margins. That, or not sleeping ever again.

It’s not news that making a software product is much less viable than it used to be. Where money is made, it’s increasingly through consulting and customization, but one wonders about the size of that market.

It’s generally a good thing that open source has “socialized” software development: technology has enabled communities of programmers to contribute and collaborate on a large scale, in a highly distributed fashion, to create good quality software available to all, taking it out of the profit equation. The problem is that the rest of the economy hasn’t caught up with this new kind of economics.

I don’t mean to sound dramatic: there are many jobs out there for programmers, of course. But it saddens me that if you want to create a product to sell, it’s simply not enough to have a good idea anymore. The product has to be dirt cheap or free, you have to respond to every message immediately, and you have to honor every single feature request. Between the open source world and the big software companies that service corporate customers, there is a vast middle ground of small companies that is quickly vanishing.

Exercise Mania

My ex-girlfriend’s mom started doing a jazzercise class in the 80s and hasn’t stopped since. It’s a weird thing.

Programming exercises can be just as addictive as some forms of well-marketed physical exercise. I have to actively resist writing solutions in Lisp to every silly problem I stumble upon online. But this morning, I came across this simple one (a blog post from 2007, but old posts resurface on Hacker News all the time) and, for some reason, couldn’t pass it up:

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.
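
For the record, a plain-vanilla solution (here in Python rather than Lisp) takes only a few lines:

    for n in range(1, 101):
        if n % 15 == 0:  # multiples of both three and five
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)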

It’s so silly that I guess some folks, like the blogger linked above, have turned it into a problem of “what’s the most creative/strange way I can think of to do this?” Which is interesting and fun in its own right.

What I don’t understand is the attitude expressed in the original FizzBuzz blog post, “Using FizzBuzz to Find Developers who Grok Coding”. Does it really matter if it takes someone 5 or 20 minutes to write a solution? (FWIW, I took about 10 minutes.) Does a shorter time with such a goofy example mean someone “groks” coding more than another? Ridiculous.

These types of puzzles are amusing precisely because a lot of real-world development (which is what the vast majority of people who code for a living do) is unfortunately pretty rote, and doesn’t require a lot of algorithmic thinking. Code monkeys learn to use libraries and frameworks, and spend a lot of time trying to use them correctly and cleanly, in order to implement straightforward requirements. So these exercises are a shift in mindset. In a tiny way, they put you back on the path of engineering. It’s why people find them interesting to do. It’s why I’m reading Structure and Interpretation of Computer Programs.

Taking a few more minutes might mean you’re not accustomed to solving such problems every minute of your working life. But that’s not at all the same as gauging whether someone understands coding or not. That kind of measurement is naive at best.