Very long term trusts will not take over the world

A few years ago Trevor Blackwell wondered why there appears to be a dearth of old family fortunes, given the historical rates of return on investment, and suggested some reasons. Here’s another: the law. Paul Collins in Lapham’s Quarterly writes that there have been panics about the opposite worry - specifically that trusts (family or otherwise) growing at an exponential rate would take over the world, sink the economy, have all the money, eat all the blue M&Ms, etc. So, for example, perpetual trusts were outlawed in England in 1859, and a century later the IRS claimed in court that a particular long term trust would destroy “the tax base of the nation, if not the world”.

None of these trusts had much chance of paying out what their founders calculated, or of destroying anything. For those of the past ~200 years it looks like unexpectedly high management fees have held back much of their growth - and why not? With no living founder, who’s going to look carefully for inflated fees, i.e. skimming? And this effect would be much stronger for a very long term trust, where after a century or two no one alive would ever have met the founder or ever meet any beneficiaries. (The same applies to dynastic fortunes held outside trusts.)

Further, if the trust is meant to eventually pay out to some long-lived institution, like a city, then that institution could get its hands on the risk-adjusted value today by having someone at an investment bank facilitate a hidden buyout of the windfall. (Hiding that sort of thing is pretty much what Wall Street does, according to Michael Lewis.) So now some investors hold, in essence, obfuscated zero-coupon long term bonds, steadily growing in value, with no mandate to hold them. Not very different from any other bond. The fact that they would, by the time they mature, be worth $1 trillion or whatever is something the economy would have had plenty of time to adjust to. No big deal.

Thinking you can force some institution to keep its hands off some money until all its current managers are long dead seems cute in an era of sophisticated finance. And plans that require continuous virtue over generations are probably not going to work. [1]






[1] Betting on accumulated virtue is a much better plan. Which makes me wonder if maybe the reason that universities tend to persist is that there is little to be gained by raiding them or skimming from them, because what they value goes unnoticed by thieves. But the better modern universities do have a lot of what thieves value: portfolio assets. Will they be raided then?

"Finding deep insight in 350 year old sayings by de La Rochefoucauld discourages me, as it suggests either that I will not be able to make much progress on those topics, or that too few will listen for progress to result. Am I just relearning what hundreds have already relearned century after century, but were just not able to pass on?"

http://www.overcomingbias.com/2009/12/why-read-old-thinkers.html

Interest in Spark waning lately? (Updated June 2013)

Spark is far easier to work with than Hadoop, and better for rapid development. Excellent! But the community of Hadoop users is probably two orders of magnitude bigger. That’s bad for Spark. But is the Spark community growing fast enough to make that soon irrelevant?

I counted new topics on the Spark users Google Group by time period.

(UPDATE: I wonder if this is a bad measure. Since this isn’t merely a forum, but also a mailing list, maybe some people leave when traffic rises.)

All of 2011 2
Aug 2012 1
Sept 2012 0
Oct 2012 1
Nov 2012 5
Dec 2012 8
Jan 2013 5
Feb 2013 101
March 2013 150
April 2013 124
May 2013 107


I was loving that growth until April. It may be that this group didn’t become the home of the community until February, meaning that March is the first full month that can be measured. Whether that’s true or not, as of April 22, April was on track to see fewer new threads than March. (UPDATE: Indeed, April ended up with 124 new threads. May, 107.)

Amusingly, after counting these up I found that Google Groups has stats pages: https://groups.google.com/forum/?fromgroups#!aboutgroup/spark-users Those are somewhat different measurements, but they show the same trend.

New questions on StackOverflow with Hadoop or Hadoop-family tags are running at ~135 per week, a monthly rate of ~540. So far there are zero Spark or Shark questions on SO.

(Actually it’s much worse than that. There are some questions on SO about other Spark projects, especially a new Flash/Flex framework by that name. This illustrates the awfulness of the name ‘Spark’. BDAS really blew it in giving their project a name that is a common English word. But that is another post.)

So I am not yet seeing exponential growth in the Spark community. Let’s hope that changes.

Best thing I've read all month

Charles Dickens, rejecting an invitation from a friend: “‘It is only half an hour’ — ‘It is only an afternoon’ — ‘It is only an evening,’ people say to me over and over again; but they don’t know that it is impossible to command one’s self sometimes to any stipulated and set disposal of five minutes — or that the mere consciousness of an engagement will sometime worry a whole day … Who ever is devoted to an art must be content to deliver himself wholly up to it, and to find his recompense in it. I am grieved if you suspect me of not wanting to see you, but I can’t help it; I must go in my way whether or no.”

The boat engine is worth 33,500 Egyptian slaves

[image: painting of burlaks hauling a boat]
In “Google was worth 1,838,389 workers in 1998, maybe” I proposed measuring the worth of innovations by estimating the equivalent amount of labor ‘saved’ by using them. But how about the great rapid transportation innovations of the 20th century? Surely those can’t be reduced to human power. Much as you can’t make a baby in one month with nine women, no amount of people can make a vehicle go faster than a running human, right? No. You can do it, and with that insight I’ll show you what the human labor equivalent of a 5 horsepower Evinrude boat engine would be in ancient Egypt. In America, pulling a canal barge with horses or mules on the banks was how midwest grain and beef got from the Great Lakes (and so the entire upper midwest) to the Atlantic. But in some sad times and places, labor was so cheap that humans were riverside draft animals 1. There’s even a name for that job in Russian: burlak.

Lots of burlaks meant you could haul lots of freight. But with a rope gear - just two spools of different radii ganged together - on an anchored axle you could “gear up” to convert their slow, strong force into a fast, weak one. A series of these rigs on the banks of a river could, with coordination, pull a long, shallow-draft boat quickly and continuously.

How many burlaks would you need? Let’s say you’ll need 5 horsepower, sustained. I’m pretty sure I could get a long, slim, light riverboat with a light load to 20 knots with a 5-horse Evinrude. A person can produce, in a short burst, 1.2 horsepower. Friction between the spool and the axle would eat up some of that, but then the boat engine loses a lot of power to turbulence around the prop, too. So let’s say we need five burlaks pulling at maximum effort at all times.

The ancient Egyptians were fine rope-makers - it looks like they could make a 100 meter rope strong enough. 2 Experience suggests that 5 knots (~6 mph or 9 kph) is about the ideal speed for a human to generate maximum power - think of a football or rugby player driving an opponent backward. That’s 1/4 the speed we want our boat to go, so the right gearing would have our burlaks surge forward only 25 meters while the boat they are pulling covers 100 meters. Our rig will need 125 meters of rope - call it 150 meters just in case. We’ll need one of these rigs (a large double spool anchored into the ground, 150 meters of rope, and 5 burlaks) every 100 meters of the trip - about 6,700 of them over the 667 kilometers between Luxor and Alexandria. That’s 33,500 burlaks. 3
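The arithmetic above can be checked in a few lines. A minimal sketch; the five-burlaks-per-rig staffing, the 100-meter rig spacing, and the route length are the post’s own figures:

```python
# Back-of-the-envelope check of the burlak numbers.
boat_speed_knots = 20
burlak_speed_knots = 5
rig_spacing_m = 100          # one rig per 100 m of boat travel
route_km = 667               # Luxor to Alexandria
burlaks_per_rig = 5

# Gearing: burlaks move 1/4 as fast as the boat, so they surge
# forward only 25 m while the boat covers 100 m.
gear_ratio = boat_speed_knots / burlak_speed_knots
burlak_surge_m = rig_spacing_m / gear_ratio

rigs = round(route_km * 1000 / rig_spacing_m)   # 6,670; the post rounds up to ~6,700
burlaks = 6700 * burlaks_per_rig                # 33,500

print(burlak_surge_m, rigs, burlaks)  # 25.0 6670 33500
```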

So if we waive the manufacturing costs of the outboard engine and the rope-spool-axle system (and the work needed to supply gasoline for the motor and food for the burlaks), the 5 horsepower outboard engine 4, when plopped down into ancient Egypt, does the work of about 33,500 slaves. 5

[images: photos of burlaks]




  1. On the Nile in fact. 

  2. The spool size needed seems reasonable for the ancients, too: 19 mm for rope thickness, and 15 inches, 12 inches, and 20 inches for the traverse, barrel diameter, and flange diameter, respectively, gives you a capacity of 137 meters of rope. Close enough. 

  3. This setup is good for more than just the occasional trip by the Pharaoh. The burlaks should be able to go all out at least four times an hour, leaving some time to pull the rope back off the spool and lay it out in place, throughout a burlak’s 15 hour day. That’s 60 trips / day on the trans-Nile high speed boating system. You could use it to go in either direction, although some boats would have to wait in places. 

  4. Of course the outboard is worth more as you could use it to cross the Nile, not just go down- or up-river. So we haven’t completely found the human labor equivalent of the outboard. 

  5. I put the painting of burlaks at the top because it’s famous and relevant. But I included photos at the bottom because they are just so tragic. God, human draft animals - the ultimate result of cheap labor. Is there any surer sign that your society has gone wrong? 
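The spool capacity in footnote 2 can be sanity-checked with the standard reel-capacity calculation. A sketch, assuming square packing of the rope (each wrap fills a d × d square of cross-section); all dimensions are from the footnote:

```python
import math

# Dimensions from footnote 2.
d = 19 / 25.4       # rope diameter: 19 mm, in inches
traverse = 15.0     # inches
barrel = 12.0       # barrel diameter, inches
flange = 20.0       # flange diameter, inches

# Winding volume: the annulus between barrel and flange, times the traverse.
volume = math.pi / 4 * (flange**2 - barrel**2) * traverse   # cubic inches

# Square packing: each inch of rope occupies d*d square inches of cross-section.
length_m = (volume / d**2) * 0.0254

print(round(length_m))   # 137, matching the footnote
```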

Google was worth 1,838,389 workers in 1998, maybe

What is an innovation worth? I’m not asking how much money it makes, because that’s just part of it. To take an extreme example, if you give an invention away to the public it can still provide value to people, it’s just that you’re not getting any of it, or no more than anyone else. But the cash value of the uncaptured part is notoriously hard to quantify. How about a different approach?

Human labor has always been a fundamental good. And lots of new technologies have been called “labor saving devices”. If we can figure out a way to calculate an innovation’s equivalent in human work, we’d have a measure that works across history, even prehistory. Plus we wouldn’t care whether the invention was ‘monetizable’, e.g. whether it appeared in a period with a legal system defending private property and maybe patents. 1 Maybe best of all, we don’t have to worry about the value of various currencies over time, and in fact can value innovations that predate money itself.

The value of some new ideas seems well captured by measuring how much human work they replace. Manufacturing and hanging drywall needs much less effort than lath and plaster. Dynamite and bulldozers remove rock with much less effort than picks and shovels. 2 But what labor is saved by the jet engine? All the laborers in the world couldn’t get you from San Francisco to New York in six hours. 3

Google’s search engine seems to be like that - not a labor saver but something that does what was before impossible. Is it though? Could you measure the value of Google by how much labor it would take to replace it? I think you can. What if, instead of Google’s new software, you just had people? Could you build such a system that could rival, if not today’s Google, then the first Google search engine, from 1998? Google was searching over only 26 million pages at the time. Couldn’t you fulfill a query over those pages given enough ‘librarians’? If we can value even Google this way, then maybe we’ve got a useful scale for innovation.

How about if you divided up the web among your librarians? Before reporting for duty, each would read every page in his or her bailiwick and remember, more or less, what it says. It’s not so unrealistic if you assign people to pages pertaining to things they already know something about. Of course most pages weren’t really about anything, perhaps even more so then than now. The blog hadn’t been formally invented 4 but ‘home pages’ seemed to make up a majority of the web, and few of them were about anything other than the author and his interests. Here’s a surviving example that exemplifies the species: http://jerrypournelle.com/ Notice the multiple sections, “Books and Movie reviews”, “What’s new”, “Reader email”, etc. Remembering what was mentioned in one of those pages wouldn’t be easy.

We can make it easier. Let’s give each librarian the software from one of the existing, crummy pre-Google search engines (or maybe just grep) and set it up to search only their 100 pages. That will give the librarian a good quick start, jog his or her memory, and help a lot with the kinds of things that unsophisticated software is good at, like finding exact matches of sentence fragments.

If we assign 100 web pages to each person and their search engine, we’d have the 26 million pages covered by 260,000 librarians. But what if you search for something common, like bill clinton and most of those 260,000 librarians have results? How to pick among them? This is really what the search engine that Google launched in 1998 did that was so great. Its results were ordered in a way that seemed like magic. You searched for that sherlock holmes story with the snake and, sure enough, the first result was The Adventure of the Speckled Band. To replicate that we’re going to need more people to sort out the work of those first 260k people. We need editors.

Let’s start with a layer of editors above the librarians. We assigned 100 pages to each librarian, so why not 100 librarians per editor? That’d be 2,600 editors. When a dump of those bill clinton results comes in to an editor, he or she picks the 10 best, in order, and passes them on, declaring them the best 10 results from the 100 librarians he edits. Each of those librarians is assigned 100 web pages, so the editor’s top 10 results are the best from the 10,000 pages his librarians cover.

Now we’ve got 10 results from each of our 2,600 editors. We need to whittle these down to 10 results to show to the user, who is still staring at the screen, waiting. You can see that all we need are more layers of editors. Log base 100 of 260,000 is 2.7, so a total of three layers of editors is enough. That’ll give us 260,000 librarians, 2,600 first level editors, 26 second level ones, and 1 chief editor: 262,627 workers.

If it takes a minute for each layer to do its work, which seems reasonable, then a user gets a result back in three minutes, and the system can handle one query per minute. 5 That’s not much. Luckily this system is easily parallelized. To get another query per minute we simply add another 262,627 workers searching over the same 26 million pages. Apparently Google was doing 10,000 searches per day in 1998. That’s about 7 per minute. 6 To handle that at a steady state, we’ll need 262,627 * 7 = 1,838,389 workers. 7
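The head-count arithmetic can be sketched in a few lines of Python; 26 million pages, 100-way fan-in at every layer, and 7 queries per minute are all figures from the text:

```python
import math

pages = 26_000_000
fan = 100                       # pages per librarian, and reports per editor

librarians = pages // fan       # 260,000

# Stack editor layers, each collapsing 100 reports into one,
# until a single chief editor remains.
levels = [librarians]
while levels[-1] > 1:
    levels.append(math.ceil(levels[-1] / fan))

pipeline = sum(levels)          # workers needed for one query per minute
queries_per_minute = 7          # ~10,000 searches/day in 1998
total = pipeline * queries_per_minute

print(levels)    # [260000, 2600, 26, 1]
print(pipeline)  # 262627
print(total)     # 1838389
```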

There you go. On the day Google launched they were providing, free of charge and with less than 1/120th the latency, what you’d need 1,838,389 smart workers to do the day before.

Does this technique work as a scale of innovation? Well, it’s got the nice advantages I mention above. But it can only give you an upper limit on the value of the innovation, since if it paid to do it the labor intensive way, that would have been happening. 8 It needs improvement. What do you think?






  1. Or whether it appealed to the richer segments of society. Of course this last item is controversial. On one hand perhaps serving the needs of people who are themselves effectively creating (and capturing) value is morally better than otherwise. But that == “it’s morally better to benefit the wealthy than the poor”, which surely isn’t true. It’s not surprising that I have waded into this swamp, since I’m more or less writing about the labor theory of value, a staple of Marxism. 

  2. Of course you need to amortize the work to build the bulldozers and dynamite. 

  3. You might be able to do better than you’d think, though. The way to see what could be done with enough manpower is to imagine yourself a Pharaoh. Better yet, the Pharaoh’s head engineer with unlimited cooperative laborers. Now how fast can you move the Pharaoh from Luxor to Alexandria? I explored that here: The boat engine is worth 33,500 Egyptian slaves

  4. Although there were proto-bloggers already. 

  5. I’m describing the worst-case scenario. Often an editor will have less to do for some searches, as when his reports give him fewer than 100 results. You could take advantage of this and drop the lockstep architecture. But no closed form solution to calculate how much more productive you could make the tree of workers comes to mind. You’d probably do best with a Monte Carlo simulation. This optimization would be an interesting problem. 

  6. Actually a lot more than that at peak periods and fewer late at night. But for simplicity we’ll stick with this. 

  7. There are complications. For one thing, if the user asks for the second page of results, everyone has to do the same thing except each editor must pass up the 20 best results, since there’s no way for such an editor to know which, if any, of those could ultimately be in the overall top 20. All his fellow editors at his level do the same, and now the editor above him has twice as much work to do. Clicking deeper into the search results makes it worse (only linearly, though). But most people don’t do that, and anyway this is supposed to be a first version. 

  8. In this case we don’t really have an upper limit either, since the army of librarians and editors are so much slower than Google, and speed is so important in a search engine. 

Facebook checkin to become the new price for free wifi

Just showing up at Coupa Cafe and connecting to their wifi now automatically does a Facebook checkin there. Good idea for the local business. And when this spreads to embarrassing locations it’ll make your Facebook feed a lot more interesting, so it’s good for Facebook too!

Seriously, I think it only checks you in if you’ve already accepted their TOS once. So the loss of privacy is probably in the ‘one more step’ sweet spot. I predict ubiquity.

My newest hack (with Dave Brushinski)

http://endrank.com/crunchbase

"The Crunchranked.  The 217,000 most important companies, financial firms, and people in the startup world, according to an impartial algorithm."

Comments: http://news.ycombinator.com/item?id=3805555

Display git branch and ‘dirty’ status in fish shell prompt

UPDATE: due to this bug, which exists in the version of fish I got from MacPorts, I decided against fish. It’s a bad sign when you find a bug that bad in your first few minutes of using something.

Dissatisfaction with the bash shell, and the feature list and humorous promotion of the new fish shell fork (“Finally, a command line shell for the 90s… You’ll have an astonishing 256 colors available for use!”), spurred me to try fish out for a few days. It’ll have to be a lot better than bash to make it worth leaving the large bash community and its google-able answers. We’ll see.

The customization in my bash .profile I can’t live without [1] is showing the current git branch in the command prompt. Googling this feature for fish got me most of the way there, with the code found here: https://wiki.archlinux.org/index.php/Fish#Configuration_Suggestions . But it didn’t quite work. Below is what I got to work. As it also shows whether you have staged or unstaged changes, I like it better than what I had in bash.

set fish_git_dirty_color red

function parse_git_dirty
    # Exit status 1 from 'git diff --quiet' means the working tree is dirty.
    git diff --quiet HEAD ^&-
    if test $status = 1
        echo (set_color $fish_git_dirty_color)"Δ"(set_color normal)
    end
end

function parse_git_branch
    # git branch outputs lines; the current branch is prefixed with a *
    set -l branch (git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/\1/')
    echo $branch (parse_git_dirty)
end

function fish_prompt
    if test -z (git branch --quiet 2>| awk '/fatal:/ {print "no git"}')
        printf '%s@%s %s%s%s (%s) $ ' (whoami) (hostname|cut -d . -f 1) (set_color $fish_color_cwd) (prompt_pwd) (set_color normal) (parse_git_branch)
    else
        printf '%s@%s %s%s%s $ ' (whoami) (hostname|cut -d . -f 1) (set_color $fish_color_cwd) (prompt_pwd) (set_color normal)
    end
end

[1] Actually the best customization is making the maximum size and entry count of the .bash_history file really big, so that I can keep a lifetime of shell work.
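For reference, a sketch of that history tweak as ~/.bashrc lines; the specific limits here are my own illustrative values, not from the post:

```shell
# Keep an effectively unlimited shell history (illustrative values).
export HISTSIZE=1000000        # max entries kept in memory per session
export HISTFILESIZE=1000000    # max lines kept in ~/.bash_history
shopt -s histappend            # append to the history file instead of overwriting it
```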

Professor Sebastian Thrun quits Stanford to teach people

The text of his homepage today:
One of the most amazing things I’ve ever done in my life is to teach a class to 160,000 students. In the Fall of 2011, Peter Norvig and I decided to offer our class “Introduction to Artificial Intelligence” to the world online, free of charge. We spent endless nights recording ourselves on video, and interacting with tens of thousands of students. Volunteer students translated some of our classes into over 40 languages; and in the end we graduated over 23,000 students from 190 countries. In fact, Peter and I taught more students AI, than all AI professors in the world combined. This one class had more educational impact than my entire career. Just watch this video.
Now that I saw the true power of education, there is no turning back. It’s like a drug. I won’t be able to teach 200 students again, in a conventional classroom setting. I’ve just peeked through a window into an entire new world, and I am determined to get there.
(and yes, I gave up my tenured position at Stanford)
I could not be more impressed with Sebastian.  From a story about this:

… the physical class at Stanford… dwindled from 200 students to 30 students because the online course was more intimate and better at teaching…”