Kill hashtables, get shorter code

Driving late at night two months ago I wondered, why do we return hashtables from functions?  Isn’t a hashtable like a function?  Instead of calling some function to get a hashtable, then looking up values on that hashtable with keys, why not simply call the function directly each time I want a value?  Just make the ‘key’ the last argument of the new function.  So I tried it on the code from my last post, where I explained hierarchical document clustering and showed some Clojure that does it.  In this post I’ll show how I eliminated hashtables in that code and got a shorter codebase, comparing the already pretty short original to the new code.  

The best way to show the change is to walk through what happens in a call to euclidean, which calculates the Euclidean distance between two documents (or two arbitrarily deep trees of documents) based on their word frequencies.

In the original, ordinary version, I get a hashtable of the word frequencies of a tree of documents with a call to freq, which recursively combines the frequencies (hashtables) of the branches of the tree, ultimately calling freq-files on each leaf of the tree, each leaf being a document. freq-files returns a hashtable of frequencies for that document.  We then pass the hashtable we get back from freq into euclidean.
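
Roughly, that original shape looks like this (a minimal sketch, not the actual code from the last post; to-words and the rule for combining branches are simplified stand-ins):

    ;; Sketch of the original, hashtable-returning shape.
    (defn to-words [file]
      (re-seq #"\w+" (slurp file)))            ; read a document, split into words

    (defn freq-files [doc]                     ; doc is one file, a leaf of the tree
      (let [words (to-words doc)]
        (into {} (for [[w n] (frequencies words)]
                   [w (/ n (count words))])))) ; hashtable of relative frequencies

    (defn freq [tree]                          ; tree is a document or a collection of subtrees
      (if (coll? tree)
        (apply merge-with + (map freq tree))   ; combine the branches' hashtables
        (freq-files tree)))

    (defn euclidean [freqs-1 freqs-2 words]    ; freqs-1 and freqs-2 are hashtables
      (Math/sqrt
       (reduce + (for [w words]
                   (let [d (- (or (freqs-1 w) 0)   ; maps are callable; nil -> 0 when a word is absent
                              (or (freqs-2 w) 0))]
                     (* d d))))))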

In contrast, in this new version, we just call euclidean before getting any frequencies. euclidean calls freq once for each word in the corpus.  Each call to freq calculates the frequency of just that one word, again by recursively combining that word's frequency across the branches of the tree, and ultimately calling freq-files on each document in the tree, which calculates just that word's frequency for that document.
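
The same pieces, reshaped so that the 'key' (the word) is the last argument and no hashtable is ever built (again a sketch; the real functions differ in detail):

    ;; Sketch of the new, per-word shape.
    (defn freq-files [doc word]
      (let [words (to-words doc)]
        (/ (get (frequencies words) word 0)    ; this one word's count in the doc...
           (count words))))                    ; ...relative to the doc's length

    (defn freq [tree word]
      (if (coll? tree)
        (reduce + (map #(freq % word) tree))   ; combine the branches, one word at a time
        (freq-files tree word)))

    (defn euclidean [tree-1 tree-2 words]
      (Math/sqrt
       (reduce + (for [w words]
                   (let [d (- (freq tree-1 w) (freq tree-2 w))]
                     (* d d))))))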

This is a little weird.  Do you see what has happened here?  That data is no longer represented with a hashtable - instead it's not represented at all.

Doing all those function calls sounds grossly inefficient, but it shouldn't worry us.  For one thing, who cares?  It's just performance.  We can always profile and deal with problems like that later.  But second, it isn't really any less efficient than getting a whole document's worth of frequencies all at once, because now I'm memoizing - caching function return values.  I've wrapped all of the functions in that chain of calls down to and including freq-files (and beyond) in memoize [1].  For example, it looks like I'm reading files from disk every time I call freq-files.  But memoizing to-words prevents that: after a first call, every time freq-files is called with the same file (but probably a different word), the memoized to-words returns a cached word list. frequencies-m (the memoized version of frequencies) is in turn called with that word list as its argument - a call which would be somewhat computationally costly, but since it has a cached value associated with that argument it simply returns that.  This frequency needs to be relative, so we've got to divide by the word count of the doc.  I've got count wrapped in memoize too, calling it count-m, which I call on the result of calling to-words again, which again returns a cached value.
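
Concretely, the wrapping is a line per function, something like this (a sketch of the boilerplate, not the exact code):

    ;; Rebinding a var to a memoized version of itself caches results by argument.
    (def to-words (memoize to-words))           ; each file is read and tokenized only once
    (def frequencies-m (memoize frequencies))   ; cached word-count map per word list
    (def count-m (memoize count))               ; cached length per word list

    (defn freq-files [doc word]                 ; cheap after the first call on a given doc
      (/ (get (frequencies-m (to-words doc)) word 0)
         (count-m (to-words doc))))

    (def freq-files (memoize freq-files))
    (def freq (memoize freq))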

The really cool thing is that that data is still there.  It’s just that you don’t see it or manipulate it in the code anymore.  It’s implicit.  

This change chops the code down by about 20%, from 55 non-comment lines of code to 44 (or 39, not counting the boilerplate function memoization, which would be a 30% reduction) [2]. 

This style of programming has downsides.  For me, wrapping functions in memoize means my editor / IDE can’t tell what arguments my functions take any more (a common problem when you have first class functions).  That sort of sucks.  Further, it makes some functions, like euclidean, hopelessly non-general, since euclidean now has to know what function to call to get a frequency [3].  It makes it harder to track down bugs.  When you get all the values you want from each function in a chain of functions, always passing the entire hashtable of them up the chain, it’s easier to figure out where a problem has come up than if you ask about a single value all the way down the call chain.  But the main disadvantage to this technique is that it doesn’t match the way I code.  I like to program very interactively.  For example, the first thing I did when I started coding this stuff up was to slurp up the documents in question and call frequencies on them.  With the resulting hashtables in hand I wrote code to find the distance between two document “vectors”.  With these distances I wrote the code to find the shortest ones, and so forth.  Calling the entire function chain to get each value is only possible after you’ve written that chain of functions.  While you’re building them it’s easier to pass all the data from each step to the next step.  But I haven’t really tried coding in this style from scratch yet, so who knows?  Maybe it’s got some other hidden advantage I don’t know about. 

But it’s hard to argue with brevity.  And by the way, there’s nothing inherently Lisp-ish about this technique.  You can do it in any language with first class functions, like Python or Ruby, just as easily.

Comment on this post at News.YC.

In the Bay Area and need help with your machine learning project? Contact me at info@reatlas.com or twitter.com/herdrick  




[1]  And we’ve been doing some of that all along anyway, since we often call freq with the same arguments, over multiple calls to euclidean.

[2]  There’s actually another related change here that isn’t so interesting, but definitely helped.  In the original version, the word-list - the list of all words that appear somewhere in the documents - is calculated immediately upon calling cluster and passed into each subsequent recursive call to cluster.  From there it’s passed into best-pairing, which passes it into euclidean, the only place where it is actually used. I did this because getting the list requires some file reading and computation, and it seemed inefficient to recalculate it every time I call euclidean, which is a lot. But that was the wrong way to think about the problem. When I went ahead and did the inefficient thing, the right fix became clear: memoization, again.  best-pairing’s argument is a list of file-trees in which every file in our corpus is represented, so we can calculate our list of words from that - so I did.  Now every call to euclidean makes a call to word-list.  But this isn’t such a problem, because word-list and most of the functions it calls are memoized.  So the only additional cost of calling that function is several memo lookups - negligible - plus flattening and sorting a file tree.  Which isn’t nothing, but I don’t think it’s a big deal.  I should profile it to find out.
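
One plausible shape for that change (a sketch; words-of-files is a stand-in name, and the real word-list may canonicalize its input differently):

    ;; Canonicalize the flattened file tree, then hand it to a memoized helper,
    ;; so repeated calls from euclidean pay only the flatten/sort plus a lookup.
    (def words-of-files
      (memoize
       (fn [files]
         (->> files (mapcat to-words) distinct sort))))  ; every distinct word in the corpus

    (defn word-list [file-trees]
      (words-of-files (sort (flatten file-trees))))      ; flatten and sort, paid on every call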

[3] Actually I could make euclidean more general by just sticking with the original version.  Because Clojure hashtables can be called as functions, I can just call euclidean like this: (euclidean (partial freq pof1) (partial freq pof2) (word-list pofs)), using closures created with partial function application as the frequency arguments.  Seemed a little harder to explain though, so I skipped it.
