Mar 10

Using MongoDB and a Relational Database Together

Below is the presentation I gave at Austin MongoDB Day on 3/27/2010. I talked about the design decisions one needs to make when determining whether to introduce a document database like MongoDB into an existing application built on a relational database. I also gave a few examples of how we’ve made MongoDB and our MySQL database live together in harmony inside CheapTweet.

Thanks to Lynn Bender of GeekAustin, CoSpace, and the folks at 10Gen for putting on a great event.

Sep 08

A recent Rubyist goes to the Lone Star Ruby Conference

I attended the Lone Star Ruby Conference (LSRC), held here in Austin over the past two days. As this was my first conference on the topic, I thought I would share some observations. First, a vigorous caveat emptor… I’m a former Java guy still relatively new to the Ruby world. At the very least I hope to provide the humble perspective of a transplant from the crowded streets of Javaopolis to the rolling hills of Ruby country. So here goes…

Rubyists aren’t afraid of change (and some churn)

I’ve long been an interested outsider to Ruby and Rails – following along on the fringes for a couple of years now by reading articles, running irb occasionally and generating some scaffolds. At that distance, you basically see Ruby and Rails. What you don’t see is the constant churn of open source projects, plugins, development tools, GUI toolkits, ideologies and best practices. Gregg Pollack and Jason Seifer’s talk on innovations in the last year and Evan Phoenix’s keynote about Ruby memes made the currents of change pretty clear. Java experienced a lot of change during my 8 years but it definitely seemed to move at a more staid pace. There were a few bigger, more corporate entities driving it (Sun, Apache, IBM) and fewer mythologized personalities like why. In the Ruby community, a lot of energy seems to be expended reinventing various wheels (as Evan pointed out with his list of ARGV processors). There’s a fine distinction between a healthy competition of ideas and a navel-gazing, ego-stroking churn that wastes time. That said, I would never stand in the way of the free market of ideas. So, by all means, (with no apologies to Mao) let 1000 ARGV processors bloom and let’s see which stick.

There are some surprising technology holes – but they’re being filled

As a former Java programmer who worked deeply with concurrency, Ruby’s afterthought of a concurrency model frightened me a bit. Java, though described by Dave Thomas on the final night’s panel as “a blunt tool”, has a basement filled with extremely sharp swords known as threads and a well-defined concurrency model. But, you say, Ruby has Threads. Yes, it does, but they’re a shadow of what’s available in Java. There, you can wield the sharp tools of concurrency to great effect on difficult problems but you can certainly stab yourself without understanding memory consistency effects, atomicity concerns and everything else.

Until recently, it seems the Rubyist approach to most of these issues has been to ignore them by slapping in a Big Fat Mutex. As someone who’s dealt with connection thread pools for RDBMS access for a long time, I was surprised to learn how innovative the non-blocking MySQL connector being worked on by espace as part of NeverBlock was considered. Fibers, mentioned by both Matz and Bruce Williams in discussing Ruby 1.9, offer a lightweight cooperative approach to something resembling concurrency. It was also heartening to hear Matz talk about how scalability is a big concern for the future of Ruby. Of course, no mention of these issues in Ruby is complete without noting that Ruby (and Rails) has been able to achieve a great deal in this area by using processes. This is usually a simple and straightforward approach but it’s fairly large-grained and could certainly stand to be augmented with more sophisticated fine-grained capabilities that live inside the interpreter.

Joy matters

Despite the above grumbling about concurrency, Ruby as a language is a beautiful thing. It really is damn pleasant to write Ruby code. I think this is a direct extension of Matz’s personality and concerns. In his keynote, he spoke powerfully on the need to see a programming language as part of the human interface to computers and the need to make that interface as joyful as possible to use. I don’t think I’ve once read “Joy” as a chief requirement in a JSR submitted to the Java Community Process for inclusion into Sun’s Java spec. That alone is probably enough reason to switch to Ruby. I do wonder how Matz’s vision of a humane language will hold up under the predicted onslaught of mindless Java-drones such as me that will create the 3 million new Ruby programmers expected over the next few years. As for that…

Don’t fear the hordes

We’re not all mindless and we’re not all drones. The unspoken sentiment when talking about the future growth of Ruby was “things are ok now because we’re all smart and dedicated craftsmen, unlike people who are using Java but will eventually invade beautiful ruby-land and make it ‘enterprisey’”. Growth is stressful for any group, especially one that gleefully defines itself as a minority in opposition to the mainstream as I believe Rubyists do. However, I think it benefits no one to assume that new Rubyists who may come to it later than you did are any less smart, dedicated or concerned about their work than you are. A community that doesn’t grow is bound to become introverted and ultimately stagnant. It would be a great loss if the Ruby community did that. The good news is I don’t think it will. Personally, I’ve felt welcomed by the Austin Rails community and I’ve also found plenty of resources elsewhere to help me learn. The best thing for Ruby’s future is to continue to embrace the hordes and show them what they’ve been missing.

And, in conclusion…

There are an incredible number of smart people who truly seem to love Ruby and want to see it succeed. Though there are ideologies, squabbles and disagreements, the part of the community assembled at LSRC was as close to a meritocracy as I’ve personally seen. The power in Ruby seems to lie with those who have most demonstrated their technical abilities and not with those who have the right corporate affiliations. This focus on merit leads to the sometimes duplicative and anarchic approach to developing new technology as people compete with their ideas. Ultimately, though, this is probably Ruby’s greatest strength and will serve it well in the future. I’m looking forward to being a part of it.

By the way, thanks to the Lone Star Ruby Foundation for putting on the conference. Great job. See you all at LSRC 2009.

Jul 08

The web is still the web

Neil McAllister at Fatal Exception, inspired by the recent announcement that some Flash data will be exposed to search engines, asks the very intriguing question, “Is the Web still the Web?” The reason for asking is the proliferation of Rich Internet Application (RIA) technologies such as the aforementioned Flash, Silverlight, Google Web Toolkit, and AJAX (sort of). As background, he invokes a history in which Tim Berners-Lee granted us simple text-only documents encoded in HTML. This is, apparently, The Way The Web Is Supposed To Be. He then draws a distinction between RIAs and HTML and asks:

Is it still the Web if it’s not really hypertext? Is it still the Web if you can’t navigate directly to specific content? Is it still the Web if the content can’t be indexed and searched? Is it still the Web if you can only view the application on certain clients or devices? Is it still the Web if you can’t view source?

My answer on all these counts: “Yes”. I’m pretty sure you could replace the term “RIAs” with “images” or “videos” in his argument at various points during the evolution of the web from nicely marked up physics documents all the way to YouTube. The point being that HTTP (as one of the key technologies which underpins the web) only asks that we be able to reference a resource via a URI but makes no claims about the representation of that resource. It’s a testament to the foresight of the original designers of web technologies that HTTP describes only how we locate, modify and de-reference resources and doesn’t come with a dependency on representing those resources in HTML. Neil seems to confuse “resource” with “HTML document”. They need not be the same thing. That would be poor design.

Text-based indexing and search as well as “view source” are (incredibly useful) byproducts of the fact that so many of the resources on the web are represented as HTML. Though it’s hard to remember a time before Google roamed the earth, it wasn’t so long ago that text-based indexing and search didn’t really work either. In time these other representations of resources will be mined, indexed and made searchable. There’s a lot of money and a lot of smart people trying to make that happen.

As for whether or not it’s still the web if you can only view it on certain clients? Well, as anyone who’s ever tried to develop a standards compliant site that also works in IE6 can attest, even relatively simple HTML web resources have client-specific dependencies. As today’s limited devices get more powerful and as browsers (hopefully) converge towards a reasonable baseline of standards, these issues, too, shall pass.

This leaves the hypertext question. The reason we call it the “web” is due to the web-like nature of the links going from one resource to another. HTML does a fantastic job of providing this web of links (the hypertext) with that simple <a> tag we know and love. If these new technologies don’t encourage connections between resources then they’re not contributing to the “web-ness” of the web. There are two parts to this: linking to other resources and allowing themselves to be linked to. Just because they’re not HTML doesn’t mean you can’t do these things. You can create links to other resources with these technologies and you can create URIs that can point to resources “within” a resource represented by these technologies. That’s not to say you can’t create a Flash site with no outgoing links and no URIs to hook into for incoming links. Of course you can just as easily create a dead-end HTML page with no anchors.

So yes, in my opinion, the web is still the web. Because of the great separation of concerns in the design of the web’s technologies, people have been able to extend it far beyond the original vision as a document sharing mechanism. It’s the greatest platform for experimentation in all the ways we can connect and deliver information yet conceived. Because of this, there will always be innovations that push the boundaries of how we’ve experienced it in the past. RIAs are just another part of the web and its continued evolution.

Jun 08

Telling semantic lies

Inspired by conversations with some smart people at a recent Semantic Web Austin event, I’ve undertaken to restart my education on semantic web technologies like RDF, RDFa, Microformats, etc. When I wear my web developer hat, I’m definitely an advocate of clean semantic markup that correctly describes the structure of the data on the page. These technologies take that approach further (in some cases much, much further). In general, that seems like an unquestionably good idea. More semantic structure means more data portability and data discovery and therefore a more powerful web. It’s probably even a necessary step towards a WebOS.

However, in my limited research to this point, it seems there’s an elephant in the room in all this advocacy. Inevitably discussions of semantic technologies include “better search” as a chief raison d’être for their use. We’ll have search engines that “understand” the machine readable data on our pages or RDF descriptions which can then draw logical inferences from the relationships among the universe of web resources. But, what if the semantic data is incorrect or just downright dishonest? Over-reliance on easily spammed meta tags gave us garbage in and garbage out in Altavista and Excite back in the 90s. It would be trivial to take my RDFa structured blog post, move it to a spam blog, find the semantically marked-up creator element, change it to someone else and republish. Poof! My finely crafted blog post on the semantic web is now selling ads for herbal remedies to unsuspecting web users with poor search skills. Of course, it’s also easy to just out and out lie when describing content. Maybe I’m not really Angelina Jolie’s spouse or Bill Gates’ neighbor even though I swear I am in my XFN standard rel attributes.

I would imagine that one thing that sets these approaches apart from 90s meta tags is the fact that many of these are used to specify relationships between resources which must be symmetric. Angelina’s resource dereferenced from her URI must indicate that I’m her spouse as well for that XFN relationship to be “believed” by a semantic web search that understands XFN. (How Angelina or any of us feel about being boiled down to an authoritative web resource identified by a URI is another issue.) Of course some people will try to game any system but I’m sure the vast majority of web users (or publishing tools) will include this structured data for legitimate purposes. But all this does make me wonder how much search engines will ultimately be able to rely on semantic data for drawing the intelligent inferences we hope to see from them. Can any of you out there that know more about these technologies help me better understand how we can ensure semantic data isn’t telling lies? If so, leave a comment; I’d love to know more.

Jun 08

Running Hadoop on Windows

What is Hadoop?

Hadoop is an open source Apache project written in Java and designed to provide users with two things: a distributed file system (HDFS) and a method for distributed computation. It’s based on Google’s published Google File System and MapReduce papers, which discuss how to build a framework capable of executing intensive computations across tons of computers. Something that might, you know, be helpful in building a giant search index. Read the Hadoop project description and wiki for more information and background on Hadoop.

What’s the big deal about running it on Windows?

Looking for Linux? If you’re looking for a comprehensive guide to getting Hadoop running on Linux, please check out Michael Noll’s excellent guides: Running Hadoop on Ubuntu Linux (Single Node Cluster) and Running Hadoop on Ubuntu Linux (Multi-Node Cluster). This post was inspired by these very informative articles.

Hadoop’s key design goal is to provide storage and computation on lots of homogeneous “commodity” machines, usually fairly beefy machines running Linux. With that goal in mind, the Hadoop team has logically focused on Linux platforms in their development and documentation. Their Quickstart even includes the caveat that “Win32 is supported as a development platform. Distributed operation has not been well tested on Win32, so this is not a production platform.” If you want to use Windows to run Hadoop in pseudo-distributed or distributed mode (more on these modes in a moment), you’re pretty much left on your own. Most people still won’t run Hadoop in production on Windows machines, but the ability to deploy on the most widely used platform in the world is still a good idea: it opens Hadoop up to the many developers who use Windows on a daily basis.

Caveat Emptor

I’m one of the few that has invested the time to setup an actual distributed Hadoop installation on Windows. I’ve used it for some successful development tests. I have not used this in production. Also, although I can get around in a Linux/Unix environment, I’m no expert so some of the advice below may not be the correct way to configure things. I’m also no security expert. If any of you out there have corrections or advice for me, please let me know in a comment and I’ll get it fixed.

This guide uses Hadoop v0.17 and assumes that you don’t have any previous Hadoop installation. I’ve also done my primary work with Hadoop on Windows XP. Where I’m aware of differences between XP and Vista, I’ve tried to note them. Please comment if something I’ve written is not appropriate for Vista.

Bottom line: your mileage may vary, but this guide should get you started running Hadoop on Windows.

A quick note on distributed Hadoop

Hadoop runs in one of three modes:

  • Standalone: All Hadoop functionality runs in one Java process. This works “out of the box” and is trivial to use on any platform, Windows included.
  • Pseudo-Distributed: Hadoop functionality all runs on the local machine but the various components will run as separate processes. This is much more like “real” Hadoop and does require some configuration as well as SSH. It does not, however, permit distributed storage or processing across multiple machines.
  • Fully Distributed: Hadoop functionality is distributed across a “cluster” of machines. Each machine participates in somewhat different (and occasionally overlapping) roles. This allows multiple machines to contribute processing power and storage to the cluster.

The Hadoop Quickstart can get you started on Standalone mode and, to some degree, Pseudo-Distributed mode. Take a look at that if you’re not ready for Fully Distributed. This guide focuses on the Fully Distributed mode of Hadoop. After all, it’s the most interesting mode, since it’s where you’re actually doing real distributed computing.

Java

I’m assuming that if you’re interested in running Hadoop, you’re familiar with Java programming and have Java installed on all the machines on which you want to run Hadoop. The Hadoop docs recommend Java 6 and require at least Java 5. Whichever you choose, make sure you have the same major Java version (5 or 6) installed on each machine. Also, any code you write for Hadoop’s MapReduce must be compiled with the version you choose. If you don’t have Java installed, go get it from Sun and install it. I will assume you’re using Java 6 in the rest of this guide.

Cygwin

As I said in the introduction, Hadoop assumes Linux (or a Unix-flavored OS) is being used to run it. This assumption is buried pretty deeply. Various parts of Hadoop are executed using shell scripts that will only work in a Linux-style shell. It also uses passwordless secure shell (SSH) to communicate between computers in the Hadoop cluster. The best way to do these things on Windows is to make Windows act more like Linux. You can do this using Cygwin, which provides a “Linux-like environment for Windows” that allows you to use Linux-style command line utilities as well as run really useful Linux-centric software like OpenSSH. Go download the latest version of Cygwin. Don’t install it yet. I’ll describe how you need to install it below.

Hadoop

Go download Hadoop core. I’m writing this guide for version 0.17 and I will assume that’s what you’re using.

More than one Windows PC on a LAN

It should probably go without saying that to follow this guide, you’ll need to have more than one PC. I’m going to assume you have two computers and that they’re both on your LAN. Go ahead and designate one to be the Master and one to be the Slave. These machines together will be your “cluster”. The Master will be responsible for ensuring the Slaves have work to do (such as storing data or running MapReduce jobs). The Master can also do its share of this work as well. If you have more than two PCs, you can always set up Slave2, Slave3 and so on. Some of the steps below will need to be performed on all your cluster machines, some on just the Master or the Slaves. I’ll note which apply for each step.

Step 1: Configure your hosts file (All machines)

This step isn’t strictly necessary but it will make your life easier down the road if your computers change IPs. It’ll also help you keep things straight in your head as you edit configuration files. Open your Windows hosts file located at c:\windows\system32\drivers\etc\hosts (the file is named “hosts” with no extension) in a text editor and add the following lines (replacing the NNNs with the IP addresses of both master and slave):

NNN.NNN.NNN.NNN master
NNN.NNN.NNN.NNN slave
Save the file.

Step 2: Install Cygwin and Configure OpenSSH sshd (All machines)

Cygwin has a bit of an odd installation process because it lets you pick and choose which libraries of useful Linux-y programs and utilities you want to install. In this case, we’re really installing Cygwin to be able to run shell scripts and OpenSSH. OpenSSH is an implementation of a secure shell (SSH) server (sshd) and client (ssh). If you’re not familiar with SSH, you can think of it as a secure version of telnet. With the ssh command, you can login to another computer running sshd and work with it from the command line. Instead of reinventing the wheel, I’m going to tell you to go here for step-by-step instructions on how to install Cygwin on Windows and get OpenSSH’s sshd server running. You can stop after instruction 6. Like the linked instructions, I’ll assume you’ve installed Cygwin to c:\cygwin though you can install it elsewhere.

If you’re running a firewall on your machine, you’ll need to make sure port 22 is open for incoming SSH connections. As always with firewalls, open your machine up as little as possible. If you’re using Windows firewall, make sure the open port is scoped to your LAN. Microsoft has documentation for how to do all this with Windows Firewall (scroll down to the section titled “Configure Exceptions for Ports”).

Step 3: Configure SSH (All Machines)

Hadoop uses SSH to allow the master computer(s) in a cluster to start and stop processes on the slave computers. One of the nice things about SSH is it supports several modes of secure authentication: you can use passwords or you can use public/private keys to connect without passwords (“passwordless”). Hadoop requires that you setup SSH to do the latter. I’m not going to go into great detail on how this all works, but suffice it to say that you’re going to do the following:

  1. Generate a public-private key pair for your user on each cluster machine.
  2. Exchange each machine user’s public key with each other machine user in the cluster.

Generate public/private key pairs

To generate a key pair, open Cygwin and issue the following commands ($> is the command prompt):
$> ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$> cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Now, you should be able to SSH into your local machine using the following command:
$> ssh localhost

When prompted for your password, enter it. You’ll see something like the following in your Cygwin terminal.

hayes@localhost's password:
Last login: Sun Jun 8 19:47:14 2008 from localhost

hayes@calculon ~

To quit the SSH session and go back to your regular terminal, use:
$> exit

Make sure to do this on all computers in your cluster.

Exchange public keys

Now that you have public and private key pairs on each machine in your cluster, you need to share your public keys around to permit passwordless login from one machine to the other. Once a machine has a public key, it can safely authenticate a request from a remote machine that is encrypted using the private key that matches that public key.

On the master, issue the following command in Cygwin (where “<slaveusername>” is the username you use to login to Windows on the slave computer):

$> scp ~/.ssh/id_dsa.pub <slaveusername>@slave:~/.ssh/master-key.pub

Enter your password when prompted. This will copy your public key file in use on the master to the slave.

On the slave, issue the following command in cygwin:

$> cat ~/.ssh/master-key.pub >> ~/.ssh/authorized_keys

This will append your public key to the set of authorized keys the slave accepts for authentication purposes.

Back on the master, test this out by issuing the following command in cygwin:

$> ssh <slaveusername>@slave

If all is well, you should be logged into the slave computer with no password required.

Repeat this process in reverse, copying the slave’s public key to the master. Also, make sure to exchange public keys between the master and any other slaves that may be in your cluster.

Configure SSH to use default usernames (optional)

If all of your cluster machines are using the same username, you can safely skip this step. If not, read on.

Most Hadoop tutorials suggest that you set up a user specific to Hadoop. If you want to do that, you certainly can. Why set up a specific user for Hadoop? Well, in addition to being more secure from a file permissions perspective, when Hadoop uses SSH to issue commands from one machine to another it will automatically try to log in to the remote machine using the same user as the current machine. If you have different users on different machines, the SSH login performed by Hadoop will fail. However, most of us on Windows typically use our machines with a single user and would probably prefer not to have to set up a new user on each machine just for Hadoop.

The way to allow Hadoop to work with multiple users is by configuring SSH to automatically select the appropriate user when Hadoop issues its SSH command. (You’ll also need to edit the hadoop-env.sh config file, but that comes later in this guide.) You can do this by editing the file named “config” (no extension) located in the same “.ssh” directory where you stored your public and private keys for authentication. Cygwin stores this directory under “c:\cygwin\home\<windowsusername>\.ssh”.

On the master, create a file called config and add the following lines (replacing “<slaveusername>” with the username you’re using on the Slave machine):

Host slave
User <slaveusername>

If you have more slaves in your cluster, add Host and User lines for those as well.

On each slave, create a file called config and add the following lines (replacing “<masterusername>” with the username you’re using on the Master machine):

Host master
User <masterusername>

Now test this out. On the master, go to cygwin and issue the following command:

$> ssh slave

You should be automatically logged into the slave machine with no username and no password required. Make sure to exit out of your ssh session.

For more information on this configuration file’s format and what it does, go here or run man ssh_config in cygwin.

Step 4: Extract Hadoop (All Machines)

If you haven’t downloaded Hadoop 0.17, go do that now. The file will have a “.tar.gz” extension which is not natively understood by Windows. You’ll need something like WinRAR to extract it. (If anyone knows something easier than WinRAR for extracting tarred-gzipped files on Windows, please leave a comment.)

Once you’ve got an extraction utility, extract it directly into c:\cygwin\usr\local. (Assuming you installed Cygwin to c:\cygwin as described above.)

The extracted folder will be named hadoop-0.17.0. Rename it to hadoop. All further steps assume you’re in this hadoop directory and will use relative paths for configuration files and shell scripts.
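Incidentally, Cygwin itself ships with GNU tar, so you may not need WinRAR at all. Here’s a minimal sketch of the extract-and-rename steps; a stand-in tarball is created first so the commands are runnable as-is, and a scratch directory stands in for /usr/local. In practice, point tar at the real hadoop-0.17.0.tar.gz you downloaded and use `-C /usr/local`.

```shell
# Stand-in archive so this sketch runs anywhere; in practice use the
# real hadoop-0.17.0.tar.gz and extract with -C /usr/local instead.
mkdir -p scratch/hadoop-0.17.0/conf
tar czf scratch/hadoop-0.17.0.tar.gz -C scratch hadoop-0.17.0

# Extract the tarball and rename the folder, mirroring Step 4:
mkdir -p scratch/usr-local
tar xzf scratch/hadoop-0.17.0.tar.gz -C scratch/usr-local
mv scratch/usr-local/hadoop-0.17.0 scratch/usr-local/hadoop
ls scratch/usr-local
```

Cygwin’s tar handles the .tar.gz in one step, so there’s no need for a separate un-gzip pass or a GUI tool.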

Step 5: Configure hadoop-env.sh (All Machines)

The conf/hadoop-env.sh file is a shell script that sets up various environment variables that Hadoop needs to run. Open conf/hadoop-env.sh in a text editor. Look for the line that starts with “#export JAVA_HOME”. Change that line to something like the following:

export JAVA_HOME=c:\\Program\ Files\\Java\\jdk1.6.0_06

This should be the home directory of your Java installation. Note that you need to remove the leading “#” (comment) symbol and that you need to escape both backslashes and spaces with a backslash.

Next, locate the line that starts with “#export HADOOP_IDENT_STRING”. Change it to something like the following:

export HADOOP_IDENT_STRING=MYHADOOP
Where MYHADOOP can be anything you want to identify your Hadoop cluster with. Just make sure each machine in your cluster uses the same value.
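If you’d rather script these two edits than open a text editor, sed (also included with Cygwin) can do them in place. A sketch follows; it writes a stand-in hadoop-env.sh so the commands run anywhere, and the JDK path is just the example used above — on a real machine, run the sed commands from /usr/local/hadoop against conf/hadoop-env.sh with your actual paths.

```shell
# Stand-in for conf/hadoop-env.sh containing the two commented-out lines:
printf '%s\n' '#export JAVA_HOME=/usr/lib/j2sdk1.5-sun' \
              '#export HADOOP_IDENT_STRING=$USER' > hadoop-env.sh

# Uncomment and set JAVA_HOME (note the doubled escaping: once for sed's
# replacement text, producing the backslash-escaped form Hadoop needs):
sed -i 's|^#export JAVA_HOME=.*|export JAVA_HOME=c:\\\\Program\\ Files\\\\Java\\\\jdk1.6.0_06|' hadoop-env.sh

# Uncomment and set the cluster identifier:
sed -i 's|^#export HADOOP_IDENT_STRING=.*|export HADOOP_IDENT_STRING=MYHADOOP|' hadoop-env.sh

cat hadoop-env.sh
```

The resulting JAVA_HOME line matches the escaped form shown earlier, with backslashes and the space in “Program Files” each protected by a backslash.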

To test these changes issue the following commands in cygwin:

$> cd /usr/local/hadoop
$> bin/hadoop version

You should see output similar to this:

Hadoop 0.17.0
Subversion http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r 656523
Compiled by hadoopqa on Thu May 15 07:22:55 UTC 2008

If you see output like this:

bin/hadoop: line 166: c:\Program Files\Java\jdk1.6.0_05/bin/java: No such file or directory
bin/hadoop: line 251: c:\Program Files\Java\jdk1.6.0_05/bin/java: No such file or directory
bin/hadoop: line 251: exec: c:\Program Files\Java\jdk1.6.0_05/bin/java: cannot execute: No such file or directory

This means that your Java home directory is wrong. Go back and make sure you specified the correct directory and used the appropriate escaping.

Step 6: Configure hadoop-site.xml (All Machines)

The conf/hadoop-site.xml file is basically a properties file that lets you configure all sorts of HDFS and MapReduce parameters on a per-machine basis. I’m not going to go into detail here about what each property does, but there are 3 that you need to configure on all machines: fs.default.name, mapred.job.tracker and dfs.replication. You can just copy the XML below into your conf/hadoop-site.xml file.

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:47110</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:47111</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

For more information about what these configuration properties (and others) do, see the Hadoop cluster setup docs and the hadoop-default.xml documentation.

Step 7: Configure slaves file (Master only)

The conf/slaves file tells the master where it can find slaves to do work. Open yours in a text editor. It will probably have one line which says “localhost”. Replace that with the following:

master
slave
Step 8: Firewall Configuration (All Machines)

If you’re using Windows Firewall, you will need to ensure that the appropriate ports are open so that the slaves can make HTTP requests for information from the master. (This is different from the port 22 needed for SSH.) The list of ports for which you should make exceptions is as follows: 47110, 47111, 50010, 50030, 50060, 50070, 50075, 50090. These should all be open on the master for requests coming from your local network. For more information about these ports, see the Hadoop default configuration file documentation.

You should also make sure that Java applications are allowed by the firewall to connect to the network on all your machines including the slaves.

Step 9: Starting your cluster (Master Only)

To start your cluster, make sure you’re in cygwin on the master and have changed to your hadoop installation directory. To fully start your cluster, you’ll need to start DFS first and then MapReduce.

Starting DFS

Issue the following command:

$> bin/start-dfs.sh

You should see output somewhat like the following (note that I have 2 slaves in my cluster, which has a cluster ID of Appozite; your output will vary somewhat):

starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-namenode-calculon.out
master: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-datanode-calculon.out
slave: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-datanode-hayes-daviss-macbo
slave2: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-datanode-XTRAPUFFYJR.out
master: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-secondarynamenode

To see if your distributed file system is actually running across multiple machines, you can open the Hadoop DFS web interface which will be running on your master on port 50070. You can probably open it by clicking this link: http://localhost:50070. Below is a screenshot of my cluster. As you can see, there are 3 nodes with a total of 712.27 GB of space.

Starting MapReduce

To start the MapReduce part of Hadoop, issue the following command:

$> bin/start-mapred.sh

You should see output similar to the following (again noting that I’ve got 3 nodes in my cluster):

starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-jobtracker-calculon.out
master: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-tasktracker-calculon.ou
slave: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-tasktracker-hayes-daviss
slave2: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-tasktracker-XTRAPUFFYJR

You can view your MapReduce setup using the MapReduce monitoring web app that comes with Hadoop, which runs on port 50030 of your master node. You can probably open it by clicking this link: http://localhost:50030. Below is a screenshot from my browser. There’s not much exciting to see here until you have an actual MapReduce job running.

Testing it out

Now that you’ve got your Hadoop cluster up and running, executing MapReduce jobs or writing to and reading from DFS is no different on Windows than on any other platform, so long as you use cygwin to execute commands. At this point, I’ll refer you to Michael Noll’s Hadoop on Ubuntu Linux tutorial for an explanation of how to run a MapReduce job large enough to take advantage of your cluster. (Note that he’s using Hadoop 0.16.0 instead of 0.17.0, so you’ll replace “0.16.0” with “0.17.0” where applicable.) Follow his instructions and you should be good to go. The Hadoop site also offers a MapReduce tutorial so you can get started writing your own jobs in Java. If you’re interested in writing MapReduce jobs in other languages that take advantage of Hadoop, check out the Hadoop Streaming documentation.
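If you want a feel for what a streaming job does before involving the cluster at all, the whole map/sort/reduce cycle can be simulated with ordinary pipes in cygwin. This is just an illustrative sketch of a word count, not Hadoop itself: the first awk plays the mapper (emitting word/count pairs), sort stands in for the shuffle phase, and the second awk plays the reducer.

```shell
# Simulate a streaming word-count job with plain Unix pipes:
#   mapper  -> emit "word<TAB>1" for every word on stdin
#   sort    -> stands in for Hadoop's shuffle/sort phase
#   reducer -> sum the counts for each distinct word
echo "the quick brown fox jumps over the lazy dog" \
  | awk '{for (i = 1; i <= NF; i++) print $i "\t1"}' \
  | sort \
  | awk -F'\t' '{count[$1] += $2} END {for (w in count) print w "\t" count[w]}'
```

With real input files, the same mapper and reducer scripts are what you’d hand to the streaming jar via its -mapper and -reducer options.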

How to stop your cluster

When you’re ready to stop your cluster, it’s simple: just stop MapReduce and then DFS.

To stop MapReduce, issue the following command on the master:

$> bin/stop-mapred.sh

You should see output similar to the following:

stopping jobtracker
slave: stopping tasktracker
master: stopping tasktracker
slave2: stopping tasktracker

To stop DFS, issue the following command on the master:

$> bin/stop-dfs.sh

You should see output similar to the following:

stopping namenode
master: stopping datanode
slave: stopping datanode
slave2: stopping datanode
master: stopping secondarynamenode

And that’s it

I hope this helps anyone out there trying to run Hadoop on Windows. If any of you have corrections, questions or suggestions please comment and let me know. Happy Hadooping!

Update 6/18/2008: Fixed link to Hadoop Admin screenshot. Thanks to Robert Towne for pointing out the bad link.

Jun 08

Best geeky sentence I’ve read today

With apologies to Tyler Cowen

Normalization is a kind of ethical system for data.

This is from a great post on the always fascinating High Scalability about how you sometimes just have to let go and de-normalize.

Jun 08

Facebook chat uses Erlang to scale

I started playing with Erlang last year. Mostly that meant reading the Joe Armstrong book, looking at ejabberd and writing a little code. Sadly, I’ve not had the chance to go much beyond the “playing” stage. Anyway, I’ve got a soft spot for functional languages like Erlang since my Programming Language Theory class in undergrad where we used ML. I especially like the way Joe and the rest of the people behind Erlang have built it for concurrency via tiny processes that share nothing and have provided a framework for building apps that know how to operate correctly in soft-realtime. It’s a very different way of thinking about building systems and seems to be remarkably effective.

The Erlang guys have to be feeling pretty good to hear that Facebook has used Erlang as a core component of their new chat service. (High Scalability also has a good writeup.) As Facebook engineer Eugene Letuchy describes it, their implementation uses XHR long polling which means tons of open HTTP connections. Spread this out over 70 million potential users and it’s not hard to see that Apache would break down pretty quickly. Basically it sounds like they have tons of Erlang processes servicing these connections and holding messages and presence events for users in memory if there’s not an open connection to the client.

Eugene mentions the challenge of delivering presence information as being more difficult than real-time messaging. (Something I thought a lot about when building Effusia.) He lays out the issues inherent in broadcasting presence on every state change in the form of a nasty worst-case asymptotic complexity:

The naive implementation of sending a notification to all friends whenever a user comes online or goes offline has a worst case cost of O(average friendlist size * peak users * churn rate) messages/second, where churn rate is the frequency with which users come online and go offline, in events/second.

However, he doesn’t really go into any detail on how they solved this problem. I can only assume they used some form of periodic polling on a need-to-know basis and/or coalescing friend presence updates in such a way that they’re only occasionally sent to a user.
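To get a sense of why the naive broadcast is untenable, you can plug some made-up but plausible numbers into Eugene’s formula. These figures are purely illustrative (Facebook hasn’t published theirs): 100 average friends, 1,000,000 users online at peak, and each user going on or offline once every 1,000 seconds.

```shell
# Hypothetical numbers: 100 avg friends * 1,000,000 peak users
# * 0.001 churn events/second/user = presence messages per second
echo $(( 100 * 1000000 / 1000 ))   # -> 100000 messages/second
```

A hundred thousand messages per second, just for presence, before a single chat message is delivered.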

A few other interesting notes… Apparently they used C++ to do the chat logs as Erlang is not that great at raw I/O. They also apparently use Thrift to glue everything together. (Reminds me I need to look into Thrift in more detail.)

May 08

Towards the Web OS

Cesar Torres analyzed the new Facebook profile design yesterday, as did TechCrunch. Both come to the same conclusion: Facebook is trying to become a web operating system. Both cite the use of a Mac OS X-style menu design (a weak indicator of an “OS” in my opinion), but Cesar goes further, mentioning Facebook’s application platform, chat client and data portability developments. So, does an app platform plus a menu bar plus chat an OS make? Even a nascent OS? My answer is that Facebook, while acting more and more like a web-based OS, is just a part of the Web OS.

What is an OS?

The whole concept of what makes an OS is a bit hard to pin down. At this point, I’d love to offer up the definition of an OS from my undergrad operating systems book but it appears to be missing chapter one. Suffice it to say it would have said something about CPU management, process management and probably something about batch jobs (it was really old when I got to it in 2000). Instead I’ll quote Wikipedia’s article on the topic:

An operating system is the software component of a computer system that is responsible for the management and coordination of activities and the sharing of the resources of the computer. The operating system (OS) acts as a host for application programs that are run on the machine. As a host, one of the purposes of an operating system is to handle the details of the operation of the hardware. This relieves application programs from having to manage these details and makes it easier to write applications.

If you’re a programmer and you see something that promises to relieve you from the details of something, you know you’re seeing an abstraction. Programmers love abstractions. In the name of simplification, we layer abstraction after abstraction on top of each other until we feel like we’ve got something easy enough to be productive. (And then we write something on top of that.) We write an OS to hide the hardware, we write programming languages that hide the details of the OS, we write APIs to make tasks easier in the programming languages and on and on and on. Fundamentally, an OS is just an abstraction over the bare metal.

This definition of the OS as an abstraction layer for application development is, of course, pretty technical. I would say that most people consider an OS (if they were to consider it) as the thing that lets them run their email app, their word processor or their web browser. In general, it should fade into the background and let people get on with what they want to get done using their actual apps. If it does bring itself into the foreground it should be by providing shiny eye candy and not with shiny blue screens.

Here’s the kicker though… Beyond just running our apps, we also expect our OS to allow them to share data. As OSes have evolved, we’ve come to think of them less as a resource management tool and more as a collection of useful apps that can all work reasonably well together when facilitated by the OS environment. This goes back at least to the Unix concept of small programs that take an input, perform a useful function and provide an output. This allows for all sorts of nifty command line piping and chaining of outputs to inputs, all facilitated by the operating system. I believe you could say this pattern is the precedent for today’s mashups.
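That Unix pattern is easy to demonstrate: each tool below does exactly one small job, and the shell wires their outputs to inputs. Ranking the most frequent words in a stream of text, for instance, takes nothing more than:

```shell
# Break text into one word per line, sort, count duplicates, rank by count.
echo "spam spam spam eggs eggs toast" \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -3
```

None of these programs knows anything about the others; the composition is what does the work. Swap the echo for a feed of tweets or map data and you have, in miniature, the shape of a mashup.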

So here’s a working definition. An OS is the thing that manages the hard, low-level stuff so that people can write apps, and we get things done by using and combining those apps. So does a web OS do the same thing, just on the web?

Are we there yet?

I think we do have a nascent Web OS, but it’s not one provided by a single entity. It’s not Facebook or Google. It spans the web as a whole. Each of these services provides a component of the overall experience, much in the same way that Unix utilities do. Facebook search may remind us of OS X Spotlight, but search on the Web OS is located at google.com (for now).

Of course, the Web OS is being built beyond just our user experience interacting with web sites through our browsers. The data sharing of the Web OS is being enabled by a host of technologies available to application developers. At the lowest level, we have the fundamental architecture of the web itself, HTTP, which allows any kind of resource to be located and accessed. We can use HTTP to access any sort of data, but the critical types of data for the Web OS are the meta ones we can use to actually describe data: XML, JSON and even HTML. These form the basis for all the APIs we’ve come to know and love from all the web applications out there that have our data. Over the past few years we’ve begun to scratch the surface of what we can do when we can pipe information from one app to another via the environment provided by this Web OS. From Google Maps mashups to the nearly infinite number of Twitter apps, we’re beginning to leverage these services in all kinds of ways. Once application developers can rely on these services to abstract away the complexity of certain tasks, whether that’s raw data storage, identity, search, messaging or the social graph, then we’ll have a true Web OS.

Right now, there are still too many barriers. Many (maybe most) apps don’t allow data to move from place to place or put up walled gardens around what APIs and platforms they do offer. With data portability, we’re moving in the right direction (primarily with social data at least), but we’re not there yet. And, of course, there are still tons of applications that don’t (and may never) live in the cloud and so can’t participate fully in the emerging OS.

If we’re not there yet, where are we going?

We’ve still got a long way to go with the Web OS, but the pieces are emerging. Applications are moving into the cloud and we’re in the early days of programmatic communication between our various web utilities. We’re progressively abstracting away the hard parts of building really complex and powerful systems. That’s why programmers obsess over abstractions; they make hard things simple. And when hard things become simple, you’ve just taken a step toward making even harder things possible. Ultimately, the Web OS goes way beyond the placement of menus in Facebook; it will provide us the platform to make the previously impossible possible.