Monthly Archives: September 2009

Team stein!

Yesterday morning at the doctor’s office I, Bob Glickstein, signed in at the reception desk. I was followed by a man named Milstein. He was followed by a man named Epstein!

Suppose fully 5% of this office’s patients have names ending in “stein” (surely a very generous assumption). The odds of three of those patients showing up in a row at random are slimmer than 8,000 to 1 — and they only get slimmer if the proportion of “stein” patients is less than 5%, as seems likely. (At 2%, the odds shoot up to 125,000 to 1 against.)
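That back-of-the-envelope figure is easy to check with a toy calculation, assuming each arrival is an independent draw from the same patient pool (the function name is mine, purely for illustration):

```python
# Odds against n consecutive "stein" patients, assuming arrivals are
# independent and a fraction p of the office's patients are "steins".
def odds_against_streak(p, n=3):
    # The chance of the streak is p**n, so the odds against it
    # are roughly (1 / p**n) to 1.
    return round(1 / p ** n)

print(odds_against_streak(0.05))  # 8000   -> 8,000 to 1 at 5%
print(odds_against_streak(0.02))  # 125000 -> 125,000 to 1 at 2%
```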

The likelier explanation is that it was “stein” day at this particular office. Gratifyingly, both Mr. Milstein and Mr. Epstein pronounced it STEEN like I do, not STINE like Drs. Franken- or Ein-. What are the odds of that!


Score one for the engineers

I’ve been asked about the reason for my low opinion of Microsoft. It isn’t just me of course — a lot of technologists regard Microsoft that way. Here’s an anecdote that illustrates why.


The year is 1993. No one’s ever heard of the World Wide Web. Few people have even heard of e-mail. Too often, when I explain my role at the e-mail software startup Z-Code to friends and relatives, I also have to explain what e-mail is in the first place.

Those who do know about e-mail in 1993, if transported to 2009, would not recognize what we call e-mail now. To them, e-mail looks like this:

It’s all plain, unadorned text rendered blockily on monochrome character terminals. For the most part, variable-width, anti-aliased fonts are years in the future. Boldface and italic text exist only in the imagination of the reader of a message that uses ad hoc markup like *this* and _this_. Forget about embedded graphics and advanced layout.

However, in 1993 something has just been invented that will catapult e-mail into the future: the MIME standard, which permits multimedia attachments, rich text markup, and plenty more. Almost no one has MIME-aware e-mail software yet. Meanwhile, at Z-Code, we’re busy adding MIME capabilities to our product, Z-Mail. The capabilities are primitive: for instance, if we detect that a message includes an image attachment, we’ll launch a separate image-viewing program so you can see the image. (Actually rendering the image inline comes much later for everyone.)

The Z-Mail user is able to choose an auto-display option for certain attachment types. If you have this option selected and receive a message with an image attachment, your image-viewing program pops up, displaying the attachment, as soon as you open the message. (Without the auto-display option set, you explicitly choose whether or not to launch the viewer each time you encounter an image attachment.)

The time comes when the marketing guy at Z-Code asks if we can add automatic launching of PostScript attachments, too. In 1993, PostScript is the dominant format for exchanging printable documents. (Today it’s PDF.) It turns out that a lot of potential Z-Mail users are technically unsavvy business types who exchange PostScript files often, jumping through tedious hoops to attach them, detach them, and print them out. Automatically popping up a window that renders a PostScript attachment right on the screen would be pure magic to them, changing them from potential Z-Mail users into actual Z-Mail users.

But there is a problem. PostScript files differ from image, sound, and other document files in one important respect: whereas those latter types of file contain static, inert data, requiring special programs to render them, PostScript files are themselves full-fledged computer programs. The PostScript renderer is just a language interpreter — like a computer within the computer, running the program described by the PostScript document.

Virtually all PostScript programs — that is, documents — are completely innocuous: place such-and-such text on the page here, draw some lines there, shade this region, and so on. But it’s perfectly conceivable that a malicious PostScript document — that is, program — can act as a computer virus, or worm, causing the computer to access or alter files, or use the network or CPU in mischievous ways without the user’s knowledge or approval.

So launching the PostScript interpreter with an unknown document is risky at any time. Doing so automatically — as the default setting, no less, which is what the marketing guy wanted — is foolhardy. (The reason it’s generally safe to send PostScript documents to PostScript printers — which include their own PostScript interpreters — is that unlike computers, printers do not have access to resources, like your files, that can be seriously abused.)

We, the Z-Code engineers, explain the situation and the danger. The marketing guy dismisses the possibility of a PostScript-based attack as wildly unlikely. He’s right, but we point out that adding the feature he’s asking for would make such an attack more likely, as word spreads among the bad guys that Z-Mail (a relatively widely deployed e-mail system in its time and therefore a tempting hacking target) is auto-launching PostScript attachments. Marketing Guy argues that the upside of adding the feature is potentially enormous. We say that one spam campaign containing viral PostScript attachments could cripple the computers of Z-Mail users and only Z-Mail users, a potential PR catastrophe. Marketing Guy says that our users don’t know or care about that possibility and neither should we. We say it’s our job to protect our users from their own ignorance.

The issue gets bumped up to Dan, our president, who is clearly leaning toward the marketing guy’s enormous potential upside. But after we vigorously argue the technical drawbacks of the plan and our responsibility to keep our users safe in spite of themselves, he goes with the suggestions from Engineering: do add a PostScript-launching option but turn it off by default, and educate users about the danger when they go to turn it on.
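That compromise can be sketched in a few lines of modern Python. Everything here — the names, the preference table, the warning text — is my hypothetical illustration, not actual Z-Mail code:

```python
# A sketch of "safe by default, informed opt-in" for attachment handling.
# Images hold inert data and are safe to render automatically; PostScript
# is a full programming language, so it ships disabled.
AUTO_DISPLAY_DEFAULTS = {
    "image/gif": True,
    "application/postscript": False,  # off by default: executable content
}

POSTSCRIPT_WARNING = (
    "PostScript attachments are programs. A malicious one could read "
    "or alter your files. Enable auto-display anyway?"
)

def should_auto_display(mime_type, user_prefs):
    """True if this attachment type renders without asking; a user's
    explicit choice overrides the shipped default."""
    return user_prefs.get(mime_type, AUTO_DISPLAY_DEFAULTS.get(mime_type, False))

def enable_auto_display(mime_type, user_prefs, confirm):
    """Turn auto-display on, but educate the user first for risky types."""
    if mime_type == "application/postscript" and not confirm(POSTSCRIPT_WARNING):
        return  # user declined after seeing the warning
    user_prefs[mime_type] = True
```

The point of the sketch is the defaults table: the safe behavior costs the user nothing, the risky one requires a deliberate, informed opt-in — which is exactly the balance Engineering argued for.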


This is a run-of-the-mill example of the kind of tension that exists between Marketing and Engineering in all software companies. Issues like this arose from time to time at Z-Code, and sometimes Engineering carried the day, and sometimes Marketing did. It was a good balance: it took Marketing’s outlandish promises to keep Engineering moving forward, and it took Engineering’s insight and pragmatism to keep the product safe and reliable.

As an industry insider, my impression of Microsoft is that Marketing wins all the arguments, with all that that implies for the safety and reliability of their software.

Science limerick

Posted moments ago on Facebook in response to a challenge from They Might Be Giants for “science limericks”:

Is space made of strings or of foam?
Is it flat? Does it curve like a dome?
  Does time go both ways?
  Is the cosmos a phase?
I don’t know, but I still call it home

Kai-Fu Lee and me

For the summer of 1987 I had two programming internship job offers. One — the one I accepted — was from Nathaniel Borenstein, who’d been my professor for a comparative programming languages course and liked my take on the design for an e-mail filtering language, which is what the school’s Information Technology Center (ITC) would pay me to implement. The other was to work on a speech recognition project with a different Carnegie Mellon researcher, Kai-Fu Lee. That project had a strong artificial-intelligence flavor, which appealed to me at the time; but after a semester as Nathaniel’s student I knew and liked him, whereas I’d met Kai-Fu Lee only once, for the job interview. That meeting was cordial enough, but I went with the known quantity and the rest is history.

I next heard of Dr. Lee in the 90’s, when he was a senior researcher for Microsoft. He made headlines when he fled Microsoft for Google — just as I did a few years later.

Now comes the news that Kai-Fu Lee is leaving Google. That’s too bad for Google, but at least we still have Al Spector — who was Nathaniel’s old boss and mine at the ITC!

You can’t spell leisure without (some of the letters in) socialism

The future, as seen from the 1920’s through the 1960’s, was one in which automation of ever-increasing ubiquity and reliability would liberate humans from every manner of drudgery: cooking, cleaning, driving, working. Thus liberated, the “permanent problem” of humanity, as celebrated economist John Maynard Keynes wrote in 1930, would be “to occupy the leisure” time that would be the inevitable result of consistent technological and economic progress.

Well, here we are in the future, and in spite of a conspicuous dearth of hovercars and Mars colonies, things are indeed fantastically more automated than they used to be. Those of us old enough to remember changing typewriter ribbons, getting up from the couch to turn the channel knob, and painstakingly placing the tone arm in the shiny stripe between songs would never go back. Cars aren’t driverless — yet — but some of them do unlock when their owners approach, and some of them tell you when you’re about to back into the car behind you. Robots vacuum your floors. Satellites tell you how to get from point A to point B. And don’t forget the Internet, which allows you to shop, work, communicate, renew your driver’s license, look up airline schedules, and be informed and entertained without ever leaving the house, licking a stamp, picking up the phone, or indeed engaging any muscles north of your elbows.

And yet, I don’t know about you, but figuring out what to do with our copious leisure time doesn’t appear to be a problem for anyone I know.

Here in the era of Google and PDF files I am much more productive than I ever could have been in the bad old days of filing cabinets and mimeograph machines, and the same is true for pretty much everyone else, everywhere in the developed world. And after various innovations or outright revolutions in manufacturing, construction, supply chain management, materials science, agriculture, finance, chemical engineering, electronics, and plenty more, the cost of meeting our basic material needs is much less than it used to be.

So at first glance it seems like there should be lots more slack in our economic system, and that we ought to be able to distribute that slack to the benefit of everyone.

But when robots displace thirty percent of a factory’s labor force, the increase in productivity does not result in a life of leisure for the workers that were sent home. They’re just plumb out of work. When simpler delivery systems for news and for classified advertising come along, employees in the crumbling newspaper industry don’t kick back, job-well-done, satisfied at achieving their own obsolescence.

The investment blogger Brad Burnham recently pointed out that “Craigslist collapsed a multibillion dollar classified advertising business into a fabulously profitable hundred-million-dollar business” — an example of a phenomenon common enough to have a cool new name: the “zero-billion-dollar business.” Herein lies the problem that seems to have escaped the mid-century futurists: when dramatic efficiencies arrive in an industry, lowering its overhead, that industry doesn’t suddenly become more profitable, pocketing the difference between the new lower costs and the same old price for its goods and services, able to retire its laid-off laborers with cushy pensions. No: the industry passes the savings along to you, the consumer, according to the inexorable pressures of capitalism. Any company that didn’t would find itself undercut by its competitors. As a result, the entire industry deflates, occasionally to the vanishing point: witness the fate of horse-drawn buggies, ice vendors, and more recently, consumer-grade photographic film.

Disruptions like these are great for the majority (else they wouldn’t happen) but disastrous for those who become idled by them. In the past, the people affected would slowly filter into new positions elsewhere, but as is often observed, we’re living through a period of accelerating innovation and upheaval. It’s possible that entire job categories are disappearing faster than the remaining ones are able to absorb the jobless, and if we haven’t quite reached that tipping point yet, chances are good that we will soon. Technology and the enhanced productivity it brings mean that society is learning to get along — thrive, in fact — with far fewer people working, period.

Which raises the question: is this kind of progress ultimately good for humanity? Yes, it lowers the cost of our material needs, increases abundance, and lengthens and improves our lives, but only for those who remain employed and can afford the fruits of progress.

Take this trend to a plausible extreme. When driverless cars are perfected, there will be no more need for bus, truck, and taxi drivers. A coffeemaking robot in my office portends the demise of the barista. Voice recognition keeps getting better and keeps putting phone operators out to pasture. The postal service appears to be at the beginning of what promises to be a lengthy contraction.

It’s not hard to imagine a future in which only a small fraction of the eligible workforce is actually needed to do any work. Is the resulting wealth destined to be concentrated in fewer and fewer hands? What will the rest of us do?

In our march towards a shiny future of leisure we have overlooked one important ingredient, probably because it’s been taboo even to mention it. In a 2,500-word article about the world to come, written soon after the 1964 World’s Fair (which depicted that future temptingly and convincingly), and not coincidentally at the height of the Cold War, Time magazine glosses over the missing ingredient almost completely, giving it just three words at the beginning of this remarkable sentence (emphasis mine):

*With Government benefits*, even nonworking families will have, by one estimate, an annual income of $30,000–$40,000 (in 1966 dollars).

(That’s about a quarter million today.)
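That conversion is easy to sanity-check with approximate annual-average CPI-U figures (the index values below are my assumption for illustration, not from the Time article):

```python
# Rough inflation check: convert the 1966 estimate into 2009 dollars
# using approximate annual-average CPI-U values (1966 ~ 32.4, 2009 ~ 214.5).
CPI_1966 = 32.4
CPI_2009 = 214.5

def to_2009_dollars(amount_1966):
    # Scale by the ratio of the two price indexes (~6.6x).
    return amount_1966 * CPI_2009 / CPI_1966

low, high = to_2009_dollars(30_000), to_2009_dollars(40_000)
print(f"${low:,.0f} - ${high:,.0f}")  # roughly $199,000 - $265,000
```

The midpoint lands near $230,000 — “about a quarter million” in 2009 dollars, as the parenthetical says.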

That’s right: at the same time that Americans were getting worked up about the Red Menace, ironically they also embraced (without quite thinking it through) a vision of the future that depended fundamentally on socialism — the redistribution of wealth, by government, from those whom society needs and rewards to those whom it doesn’t but who stubbornly continue to exist.

Unfortunately, even as we’re headed towards a workerless society that will depend more and more on government assistance, we are abandoning our traditional values about civic responsibility and the common good. We are becoming a nation of selfish graspers who by and large would rather demonize the unemployed than provide for them (even if we could afford to, which isn’t at all clear). Too many Americans are opposed in principle to any form of welfare, even though it’s right there in the Preamble of the Constitution, even though they rely on social programs themselves, knowingly or not.

These folks cling to two soundbites from the 1980’s — “Government is not the solution to our problem, government is the problem,” and “Greed… is good” — in lieu of any reasoned philosophy. An entire generation’s worth of politicians and civic and religious leaders have built their careers around these empty ideas, all but precluding rational debate on the subject, a debate we desperately need to have. We are barreling towards that efficient, workerless future, that’s for certain. But when the merest suggestion of government assistance prompts mobs to equate President Obama with Hitler or Satan, what hope is there that that future will even be livable?