RMS, titanic

One afternoon in 1996, as I worked with my partners at our software startup, the phone rang. I answered it, and a voice on the other end said, “Richard Stallman?”

This was disorienting. Richard Stallman was the legendary technologist who had created the Free Software Foundation, dedicated to freedom from corporate and government control for those who program computers and those who use them. He founded the GNU project, dedicated to creating an alternative to the Unix operating system unencumbered by patents and copyrights. He was famously ensconced in an office at MIT, not a house in a northern California suburb doubling as office space for our startup. Why would someone call us there looking for him?

Or did the caller think I was Stallman??

The moment was even more baffling because I was then at work (as a side project) on a book about Stallman’s other great creation, Emacs, the text editor beloved by programmers. So it wasn’t as if there were no connection between me and Stallman. But he wasn’t involved in my writing project; he had merely invented the thing it was about. That was a pretty slender thread. How do you get from that to expecting to find the great man himself in our humble headquarters?

Three years earlier I did work briefly with Stallman, after a fashion. The GNU project was releasing a new file-compression tool called gzip. Stallman wanted files compressed by gzip to have names ending with “.z”. In an e-mail debate with him, I argued that this would make them too easy to confuse with files created by “compress,” a predecessor to gzip, which used a “.Z” filename suffix. The distinction between uppercase “.Z” and lowercase “.z” would be lost if those files were ever stored on, or passed along by, an MS-DOS computer, which permitted only monocase filenames. Stallman, in his typical mulish way, refused to allow any consideration of how Microsoft software behaves to influence what the GNU project should do. But I was insistent, not least because I believed that the potential for confusion would harm the reputation of the GNU project, and I wanted GNU to succeed. I was on Stallman’s side! I was joined in my opinion by a couple of others on that thread. In the end Stallman relented, and as a result gzip used (and still uses) the filename suffix “.gz”.
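
The collision we were worried about is easy to see. Here’s a toy sketch of what would have happened on a monocase filesystem (mine, written for illustration, not anything from that old thread):

    # MS-DOS stored filenames in a single case, so gzip's proposed ".z"
    # suffix and compress's ".Z" suffix would collapse into one name
    # after a round trip through such a system.
    names = ["report.z", "report.Z"]    # a gzip file and a compress file

    # Simulate a monocase filesystem by folding every name to upper case.
    stored = {name.upper() for name in names}
    print(stored)                       # {'REPORT.Z'}: two formats, one name
    assert len(stored) == 1

    # The suffix gzip actually adopted survives the same round trip:
    assert "report.gz".upper() != "report.z".upper()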

This was a rare concession from a man whose primary goal with the Free Software Foundation was the repudiation, on principle, of the entire edifice of intellectual property law. The creation of actually useful software was only ever secondary to that goal.1 To the extent that Microsoft owed its existence to intellectual-property plunder, Stallman would have seen it as a moral obligation not to allow it to affect the design of GNU gzip.

Stallman was never one to allow pragmatism to overcome principle, an outlook that extended far beyond his professional pursuits and into all aspects of his public persona, with results often off-putting and occasionally problematic. In principle, why should anyone object to an impromptu solo folk dance in the middle of a fancy restaurant (as recounted in Steven Levy’s recent Wired article)? No one should, of course — in principle. In practice, most of us would agree there are good reasons to keep your spontaneous folk-dancing inhibitions in place. But Stallman is not most of us. In principle, it’s merely being intellectually honest to engage in a little devil’s-advocate hypothesizing on the Jeffrey Epstein scandal, and how Stallman’s colleague Marvin Minsky might have been involved. In practice, for a prominent public figure — one with authority over others — to do so at this moment, and in that way, betrays at best a cluelessness that’s just this side of criminal. It’s what forced Stallman to resign recently from the organization he’s led for over three decades.

But in 1996, when the phone rang at my startup, Stallman was, to me and my colleagues, simply a legendary hero hacker and fighter against oppression. When I said, “Hello?” and the voice on the other end said, “Richard Stallman?” the effect on me wouldn’t have been too different if it had said, “Batman?”

I stammered something along the lines of, sorry, this is Zanshin, in California; Richard Stallman works at the Massachusetts Institute of Technology. The voice said, “No, this is Richard Stallman.” What I had taken for a question mark was really a period. (Or possibly an exclamation point.)

In principle, it makes perfect sense to shorten “Hello, this is Richard Stallman” to “Richard Stallman.” Those four other syllables seem superfluous; might as well save the effort it takes to utter them. In practice, of course, it is decidedly odd, when placing a phone call, simply to declare your own identity and expect your intention to be understood, especially when you leave off anything like “May I speak to Bob Glickstein, please?”

Stallman was calling me, it turns out, because of the book I was writing. He wanted to know if I would consent to giving the book away for free. (A few years later Stallman would put the same pressure on his biographer, Sam Williams, as recounted in the Salon.com review of Williams’ book.) I said that I was not unsympathetic to his request — after all, Emacs, the topic of my book and the output of many programmer-hours of labor, was distributed for free by the FSF. But how could I consent, when my publisher had production and marketing costs to recover? And what about the value of all the time I had invested? Couldn’t I reasonably expect some compensation for that, especially since I was not yet drawing any salary from my startup? I additionally thought, but did not say out loud, that unlike Stallman himself I had not earned a MacArthur genius grant to fund my writing and programming whims.

Stallman had no answer for the questions I posed, other than to reiterate a few times his certainty that the book should by rights be free. We ended our call, and (as it turned out) our professional association, at a stalemate on this topic.

As with the gzip episode, I was nominally on Stallman’s side. I would have given serious consideration to his request if he could have compromised somehow, or if he could have spoken about the prospects for earning revenue from a product even when it’s given away for free, or, hell, if he could simply have articulated some understanding of or sympathy for the objections I raised. But he was doctrinaire. The principle was the one and only consideration for him.

The paradox of Richard Stallman is that this single-mindedness made him remarkable and allowed him to achieve remarkable things; but his disregard for pragmatism in favor of an insistence on principle cost him, on this occasion, the goal of freely distributing my book — and, twenty-odd years later, it cost him his career.

  1. Ironically it’s that secondary goal at which the FSF has been more successful by far (despite the many who have rallied to Stallman’s anti-copyright banner — myself included, with varying degrees of conviction over the years). Intellectual property law is as constraining to individuals and organizations as ever. But you and I and everyone we know and, not to put too fine a point on it, our entire modern information economy, depend daily on infrastructural software created by the FSF.

Kill Ralphie! saved!


[Cross-posted at kill-ralphie.blogspot.com/2015/06/kill-ralphie-saved.html.]

In the 1980’s, students and faculty at Carnegie Mellon University were on the Internet, but there was no World Wide Web yet – no browsers, no websites, no Google, Facebook, or YouTube; in fact, no video and almost no graphics, just text. But there still existed social communities online, organized into discussion forums on numerous topics. Usenet was the biggest of these. Carnegie Mellon had its own internal collection of discussion forums called bboards.

One bboard was called “Kill Ralphie!” When someone posted to Kill Ralphie, they were contributing a chapter to an ongoing story about a hapless lad who is alternately placed in immediate mortal danger, then rescued, both in the most creative and entertaining ways possible. I was an enthusiastic participant back then, along with many others at CMU. Writing for an audience of fellow contributors was a formative experience for me that improved my prose and humor skills from “immature” to “slightly less immature.”

Well, guess what? Kill Ralphie! lives again! I’ve taken that old pastime and turned it into a fun new website. Please check it out, contribute chapters, and enjoy: kill-ralphie.com.

The Brick Prison Playhouse

It’s the thirtieth anniversary of The Brick Prison Playhouse.


Alumni of Hunter College High School always seem compelled to mention that it’s where they attended the seventh through twelfth grades, when others would simply say “where I went to high school.”

It’s understandable. First there’s the confusing name of the place: it’s neither a college nor merely a high school. Second, when you’re in the habit of telling stories from high school, and some of them take place in 1978 and some take place in 1984, unless you’re diligent about the seventh-through-twelfth disclaimer, sooner or later someone is going to do the mental arithmetic and wonder.

As a junior, late in 1982, a few friends and I felt the urge to write and perform a collection of short one-act plays. With faculty help we ended up founding The Brick Prison Playhouse (so called because the school’s appearance earned it the affectionate nickname “the brick prison”), a repertory group for performing student-written plays, as opposed to the existing repertory groups that performed established plays and musicals.

Our first performances took place on February 10th and 11th, 1983. They were a success and a lot of fun. After the last performance the entire playhouse group trekked through Central Park in a light snowfall to the Upper West Side apartment of our friend Michael, where we had a memorable cast party — and ended up snowed in. The only reason I know the exact dates is that the snowfall turned out to be the great New York Blizzard of 1983.

The next morning, I had to make it back to Queens, but transit had been only partially restored throughout the city. Exiting Michael’s building I was amazed to discover that Broadway was navigable only via a shoulder-high snow trench, just wide enough for two pedestrians to squeeze past each other. Through this narrow channel I worked my way downtown to where working buses and subways could be found — with my also-Queens-bound friend Steve in tow, on crutches with a broken ankle!

(Steve was the best writer in our group. The most talented actor among us was Andrew. I’m pleased to report that today Steve is a professional writer and Andrew a professional actor.)

On the radio program Fresh Air the other day, I heard an interview with the journalist Chris Hayes. In it, he mentions that he grew up in New York City, attended a school from the seventh through the twelfth grades, and performed in a student-written play in the eighth grade. From this I concluded (correctly) that Hayes is a Hunter alumnus, and that The Brick Prison Playhouse still exists!

It occurs to me this is the second blog post in a row where I lay claim to an unacknowledged legacy. Well, acknowledged or not, this one’s an agreeable legacy to have, and the Brick Prison Playhouse’s near-mention on Terry Gross’s widely heard radio show is a nice little brush with fame on this, its thirtieth anniversary.

Where were you in ’62?


Happy birthday to Beatlemania! The Beatles’ first single, “Love Me Do,” was released on this date fifty years ago. And happy birthday to James Bond! Dr. No, the first movie in the world’s longest-running film franchise, also opened fifty years ago today, in 1962.

Earlier this year we observed the fiftieth anniversary of John Glenn’s historic orbit of the Earth, and the fiftieth anniversary of Kennedy’s landmark “we choose to go to the moon” speech.

In 1962, Stan Lee and the other adolescents at Marvel (I use the term affectionately) created Spider-Man, Thor, and the Hulk. Fifty years later, those creations are still relevant enough to star in their own brand-new blockbuster films.

The films Lawrence of Arabia, To Kill a Mockingbird, and The Music Man are fifty years old too. The famous escape from Alcatraz happened fifty years ago. The Seattle Space Needle opened to the public. Polaroid introduced its instant color film. Rachel Carson published her world-changing book, Silent Spring.

I don’t recall celebrating so many fiftieth anniversaries last year, do you? Something about 1962 appears to have been so special that we are still celebrating its achievements and events.

It wasn’t all good. People everywhere braced for global annihilation during the Cuban Missile Crisis, fifty years ago this month. Marilyn Monroe OD’d. But there was enough nostalgia for 1962 that, years later, George Lucas set the events of American Graffiti in that year, and it’s also when the action in Animal House takes place.

What was it about 1962? Fifty years from now, what events or achievements of today will people still be celebrating?

Predicting the present


One day long ago, when the IBM PC was still new, my friend Mike asked me to imagine my ideal computer. I described something very like the IBM PC, but with more memory and a bigger hard drive — 50 megabytes, say, instead of 10 or 20. I couldn’t imagine any use for much more than that. (Today of course you can’t even buy a thumb drive that tiny.) I grudgingly allowed that a bitmap display might be more useful than the 80-column-by-24-line character terminal that PC’s had, but that was all I would consider adopting from the then-brand-new Apple Macintosh, which I dismissed as a silly toy unworthy of Real Programmers.

“Why?” I asked Mike. “What’s your ideal computer?”

Mike described something no bigger than an 8.5×11 sheet of paper and no more than an inch or so thick, whose entire surface was a full-color display. It could be carried in the hand or slipped into a backpack. “What about the CPU, where would that go?” I asked. I wasn’t getting it. Mike patiently explained that the whole system — CPU, RAM, video driver, power supply — was inside that little slab. I scoffed. Cramming everything into such a small space was obviously impossible, and no battery that could fit in such a thing would ever have enough power to spin a floppy disk drive for long. “Anyway, even if you could build it,” I told him, “it wouldn’t be as convenient as you’d like. You’d have to carry around a keyboard too and plug it in every time you wanted to use it.” No you wouldn’t, said Mike. The display could be touch-sensitive. The keyboard could be rendered on the screen as needed and input accepted that way.

This was 1984. What Mike described was pure science fiction. (In 1987 that became literally true, when the touch-controlled “padd” became a staple prop on Star Trek: The Next Generation.) Yet here I am, the proud new owner of a Nexus 7, the latest in high-powered touch-sensitive computing slabs that put even Mike’s audacious vision to shame.

It wasn’t the first time I’d had a failure of technological vision, nor was it the last.

Several years earlier, before even the IBM PC, I was spending a lot of afterschool hours at my friend Chuck’s house, and a lot of those hours on his dad’s home computer, one of the only ones then available: the beloved but now mostly forgotten Sol-20. (The TRS-80 and the Apple ][ were brand new and just about to steal the thunder from hobbyist models like the Sol-20.) It had a small black-and-white monitor that could display letters, numbers, typographical marks, and a few other special characters at a single intensity (i.e., it really was “black and white,” not greyscale). It looked like this:

The display was so adequate for my meager computing needs there in the late 1970’s that when the computer magazines I read started advertising things like Radio Shack’s new Color Computer (that’s what it was called — the “Color Computer”), I dismissed them as children’s toys.

Once, Chuck and I entertained the idea of making a little science fiction movie. A scene in Chuck’s script had a person’s face appearing on a computer monitor and speaking to the user. It was his plan to film this scene using his father’s computer. I said, “How are we going to make a face appear on a computer monitor?” I had only ever seen letters and numbers blockily rendered on it. Chuck pointed out that the monitor was really just a small TV. “Oh yeah,” I said, feeling stupid. It ought to be able to display anything a TV could. Of course we’d have to hook it up to a different source; obviously no computer could handle rendering full-motion video. Yet here I am, a software engineer at YouTube.

There’s more. In the mid 80’s, my sometime boss Gerald Zanetti, the commercial food photographer and computing technophile, once described his vision for composing and editing photographs on a high-resolution computer display. If a photograph included a bowl of fruit, he explained, he wanted to be able to adjust the position of an orange separately from the grapes and the bananas surrounding it. I said that such technology was far in the future. I’d seen graphics-editing programs by then, but they treated the image as a grid of undifferentiated pixels. Separating out a foreground piece of fruit from other items in the background simply was not feasible. Yet just a couple of years later Photoshop exactly realized Zanetti’s vision.

In the mid 90’s, when the web was new, my friend and mentor Nathaniel founded a new company, First Virtual, to handle credit card payments for Internet commerce. At the time there was no Internet commerce. Nathaniel and company invented some very clever mechanisms for keeping sensitive credit-card information entirely off the Internet while still enabling online payments. But I felt their system was too complicated to explain and to use, that people would prefer the familiarity and convenience of credit cards (turns out I was right about that), and that since no one would (or should!) ever trust the Internet with their credit card information, Internet commerce could never amount to much. Yet here I am, receiving a new shipment of something or other from Amazon.com every week or two.

Oh well. At least I’m in good company. I’m sensible enough finally to have learned that however gifted I may be as a technologist, I’m no visionary. Now when someone describes some fantastical new leap they imagine, I shut up and listen.

Homeopathic democracy


Before George Washington became the first chief executive under the U.S. Constitution, he presided over the Philadelphia Convention at which the Constitution was drafted. Throughout the entire proceedings — which had their fair share of passionate disputes — Washington spoke up exactly once on an issue under debate. The proposed size of the U.S. House of Representatives was too small, he said. It would have meant one congressman for every 40,000 citizens. He insisted that one per 30,000 would produce a better, more responsive democracy, and so the change was made.

Within thirty years, as new states were admitted and the population grew, the ratio had grown to the level against which Washington had argued: 40,000 citizens per congressman. Four decades after that, it had more than tripled: 127,000 to 1. By this time the population of the country had grown from under two million to over 31 million, and the House of Representatives had gone from a cozy 65 members to a rowdy 241.

In 1912 the House of Representatives swelled to 435 members — roughly one for every 212,000 citizens — and there it was capped by legislation in 1929, by which time the ratio was more than 280,000 to 1.

If we had maintained that ratio, today the House would have 1,093 members. If we had maintained Washington’s ratio, today it would have 10,291 members. As it is we’re stuck with 435 — fewer than one congressperson for every 700,000 people. Some U.S. citizens have proportionally more Senate representation than House representation.
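
The arithmetic is easy to check against the 2010 census count, which the figures above line up with:

    # Checking the House-size arithmetic against the 2010 census.
    population = 308_745_538     # 2010 census count

    print(population // 30_000)  # 10291 members at Washington's 1-per-30,000
    print(population // 435)     # 709759 citizens per seat under the 1929 cap
    # Working backward, the 1,093 figure implies a ratio of roughly
    # 282,000 to 1 at the time of the cap ("more than 280,000 to 1"):
    print(population // 1_093)   # 282475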

What was the cry of the Boston rioters in 1773?1 “No taxation without representation.” What would they have said about taxation with just a teensy amount of representation? No wonder our democracy is presently working about as well as a course of Oscillococcinum (the flu remedy so dilute that one dose contains none of its so-called active ingredients).
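
That quip holds up quantitatively, by the way. Oscillococcinum is sold as a “200C” preparation, meaning a 1:100 dilution repeated two hundred times, and a little logarithm arithmetic shows that no molecule of the starting ingredient survives:

    from math import log10

    # A 200C preparation is a 1:100 dilution repeated 200 times.
    log_dilution = 200 * log10(100)   # 400 orders of magnitude
    log_molecules_per_dose = 22       # ~1e22 molecules in a one-gram dose

    # Expected active molecules per dose, as a power of ten:
    print(log_molecules_per_dose - log_dilution)   # -378.0, i.e. none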

A ten-thousand-member Congress would probably be unmanageable. But a nation of three hundred million is (I think we can conclude from current events) ungovernable by a legislature so small relative to its population. Perhaps it’s time to reconstitute our national government. Here’s one idea off the top of my head for bringing representation closer to the people: add a level of hierarchy between the state and federal levels, according to the ten regions of American politics.

  1. No evocation of the modern day Tea Party is intended.

Quitting time

On this date fifteen years ago, several employees of NCD Software, formerly Z-Code, resigned simultaneously. I was one of them.

Two years earlier, Z-Code’s founder, Dan, sold out to Network Computing Devices over the objections of most of his staff. NCD, whose line of business had no discernible overlap with Z-Code’s, proceeded to drive Z-Code and itself right into the ground. Dan was the first casualty, lasting only a few months after the merger. NCD’s CEO and top VP, informally known as “the Bill and Judy show,” followed not long after. A lot of clueless mismanagement ensued. The energy of our once terrific engineering team dissipated before our eyes. We tried to turn things around, to make our bosses understand (for instance) that you can’t just tell an e-mail software team to make their e-mail suite into one of those newfangled web browsers that the new CEO had heard so much about, or that if you don’t pay your salespeople a commission for selling the company’s software, they won’t sell the company’s software.

Each time management did something boneheaded, we convened a session of “The Alarmists’ Club,” which met at lunch over beers and tried to think of ways to effect change at NCD. After enough of those proved fruitless, our discussions turned to how we could do things better ourselves. And so some time early in 1996 we sought the advice of a Silicon Valley lawyer about how to leave NCD en masse with minimal legal repercussions. The bulk of the advice was to put off discussion of any new venture until after the separation was complete; and to be aware that NCD was liable to use veiled threats, emotional pleas, and vague promises in an attempt to get us not to leave.

On 14 February 1996, NCD did all these things. We had prepared our terse resignation letters, offering two weeks’ notice, and delivered them in the morning. Within a couple of hours, Mike Dolan, one of the bigwigs from NCD headquarters in Mountain View, made the trip to the Z-Code offices in Novato to meet with us individually.

I was not yet 30, and when Dolan, an industry veteran, leaned on me in our one-on-one meeting I was definitely cowed. But my co-resigners and I had coached one another on how to withstand exactly the sort of combined intimidation and guilt trip that I was now getting, and so I stuck to my guns, kept the pointless justifications to a minimum, and refrained from blame or recrimination.

We maintained our solidarity, and because NCD declined our offer of two weeks’ notice, that was our last day there. We left feeling victorious, though what exactly we had won was never clear, and our sense of triumph was tempered by having effectively sandbagged our erstwhile coworkers.

After enjoying a few days of freedom it was time to start planning our new enterprise. But that’s another story…

Greatest hits: Shame

The publisher Tim O’Reilly wrote in a Buzz post recently,

I’ve always loved the ancient Greek idea of shame – aidos – as that quality that restrains people from doing wrong

which inspired me to add the following comment:

In a biography I once read of George Washington, the author (whose name, alas, I can’t remember at the moment) pointed out that his virtues, and those of many of his contemporaries, seem almost superhuman by today’s standards. By way of explanation he pointed out that life expectancy was much shorter then, so the pressure to achieve renown that would outlive you was consequently greater (not to mention that in a less populous world, such renown was within easier reach). You were gonna die soon, that was almost certain — but shame could kill your legacy, a more thorough and fearsome kind of death.

I think this, too, has something to do with the decline of shame (in addition to other obvious causes such as the rise of privacy, isolation, and anonymity). By and large we now live long enough to get over anything shameful that may happen. We see it happen again and again on the evening news, as disgraced public figures make unlikely comebacks. VH-1’s “Behind the Music” has turned the familiar arc of shame and redemption into a cottage industry. Shame is no longer something to be avoided at all costs. More’s the pity.

Right move made

Before the iPhone and the Blackberry was the Sidekick, a.k.a. the Hiptop, the first mass-market smartphone and, for a while, the coolest gadget you could hope to get. Famously, and awesomely, the Hiptop’s spring-loaded screen swiveled open like a switchblade at the flick of a finger to reveal a thumb-typing keyboard underneath, one on which the industry still hasn’t managed to improve. Your Hiptop data was stored “in the cloud” before that term was even coined. If your Hiptop ever got lost or stolen or damaged, you’d just go to your friendly cell phone store, buy (or otherwise obtain) a new one, and presto, there’d be all your e-mail, your address book, your photos, your notes, and your list of AIM contacts.

The Hiptop and its cloud-like service were designed by Danger, the company I joined late in 2002 just as the very first Hiptop went on the market. I worked on the e-mail part of the back-end service, and eventually came to “own” it. It was a surprisingly complex software system and, like much of the Danger Service, required continual attention simply to keep up with rising demand as Danger’s success grew and more and more Sidekicks came online.

Early in 2005, the Danger Service fell behind in that arms race. Each phone sought to maintain a constant connection to the back end (the better to receive timely e-mail and IM notices), and one day we dropped a bunch of connections. I forget the reason why; possibly something banal like a garden-variety mistake during a routine software upgrade. The affected phones naturally tried reconnecting to the service almost immediately. But establishing a new connection placed a momentary extra load on the service as e-mail backlogs, etc., were synchronized between the device and the cloud, and unbeknownst to anyone, we had crossed the threshold where the service could tolerate the simultaneous reconnection of many phones at once. The wave of reconnections overloaded the back end and more connections got dropped, which created a new, bigger reconnection wave and a worse overload, and so on and so on. The problem snowballed until effectively all Hiptop users were dead in the water. It was four full days before we were able to complete a painstaking analysis of exactly where the bottlenecks were and use that knowledge to coax the phones back online. It was the great Danger outage of 2005 and veterans of it got commemorative coffee mugs.
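
The standard defense against this kind of reconnection stampede is to make clients retry with randomized exponential backoff, so that a fleet that fails together doesn’t retry together. Here’s a minimal client-side sketch of the idea (illustrative only; it isn’t how the Hiptop actually reconnected):

    import random
    import time

    def reconnect(connect, max_delay=300.0):
        """Retry `connect` with randomized exponential backoff."""
        delay = 1.0
        while True:
            try:
                return connect()
            except ConnectionError:
                # Sleep a random amount up to the current cap ("full
                # jitter"), then widen the cap. Randomizing the waits
                # spreads a synchronized wave of reconnections out over
                # time instead of letting it slam the service at once.
                time.sleep(random.uniform(0, delay))
                delay = min(delay * 2, max_delay)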


The graphs depict the normally docile fluctuations of the Danger Service becoming chaotic

The outage was a near-death experience for Danger, but the application of heroism and expertise (if I say so myself, having played my own small part) saved it, prolonging Danger’s life long enough to reach the cherished milestone of all startups: a liquidity event, this one in the form of purchase by Microsoft for half a billion in cash, whereupon I promptly quit (for reasons I’ve discussed at by-now-tiresome length).

Was that ever the right move. More than a week ago, another big Sidekick outage began, and even the separation of twenty-odd miles and 18 months couldn’t stop me feeling pangs of sympathy for the frantic exertions I knew were underway at the remnants of my old company. As the outage drew out day after day after day I shook my head in sad amazement. Danger’s new owners had clearly been neglecting the scalability issues we’d known and warned about for years. Today the stunning news broke that they don’t expect to be able to restore their users’ data, ever.

It is safe to say that Danger is dead. The cutting-edge startup, once synonymous with must-have technology and B-list celebrities, working for whom I once described as making me feel “like a rock star,” will now forever be known as the hapless perpetrator of a monumental fuck-up.

It’s too bad that this event is likely to mar the reputation of cloud computing in general, since I’m fairly confident the breathtaking thoroughness of this failure is due to idiosyncratic details in Danger’s service design that do not apply at a company like, say, Google — in whose cloud my new phone’s data seems perfectly secure. Meanwhile, in the next room, my poor wife sits with her old Sidekick, clicking through her address book entries one by one, transcribing by hand the names and numbers on the tiny screen onto page after page of notebook paper.

Score one for the engineers

I’ve been asked about the reason for my low opinion of Microsoft. It isn’t just me of course — a lot of technologists regard Microsoft that way. Here’s an anecdote that illustrates why.


The year is 1993. No one’s ever heard of the World Wide Web. Few people have even heard of e-mail. Too often, when I explain my role at the e-mail software startup Z-Code to friends and relatives, I also have to explain what e-mail is in the first place.

Those who do know about e-mail in 1993, if transported to 2009, would not recognize what we call e-mail now. To them, e-mail looks like this:

It’s all plain, unadorned text rendered blockily on monochrome character terminals. For the most part, variable-width, anti-aliased fonts are years in the future. Boldface and italic text exist only in the imagination of the reader of a message that uses ad hoc markup like *this* and _this_. Forget about embedded graphics and advanced layout.

However, in 1993 something has just been invented that will catapult e-mail into the future: the MIME standard, which permits multimedia attachments, rich text markup, and plenty more. Almost no one has MIME-aware e-mail software yet. Meanwhile, at Z-Code, we’re busy adding MIME capabilities to our product, Z-Mail. The capabilities are primitive: for instance, if we detect that a message includes an image attachment, we’ll launch a separate image-viewing program so you can see the image. (Actually rendering the image inline comes much later for everyone.)

The Z-Mail user is able to choose an auto-display option for certain attachment types. If you have this option selected and receive a message with an image attachment, your image-viewing program pops up, displaying the attachment, as soon as you open the message. (Without the auto-display option set, you explicitly choose whether or not to launch the viewer each time you encounter an image attachment.)
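
In outline, the auto-display decision was a simple per-type dispatch. Here’s a toy sketch of it (in Python, nothing resembling Z-Mail’s actual code; the names are invented for illustration):

    import subprocess

    # Hypothetical viewer table and per-type user preferences.
    VIEWERS = {"image/gif": "xv", "application/postscript": "gs"}
    AUTO_DISPLAY = {"image/gif": True}    # the user opted in for images only

    def open_attachment(mime_type, path, ask_user):
        viewer = VIEWERS.get(mime_type)
        if viewer is None:
            return    # no registered viewer for this type; do nothing
        # Launch the viewer automatically if auto-display is enabled for
        # this type; otherwise put the choice to the user each time.
        if AUTO_DISPLAY.get(mime_type, False) or ask_user(viewer, path):
            subprocess.Popen([viewer, path])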

There comes a time when the marketing guy at Z-Code asks if we can add automatic launching of Postscript attachments, too. In 1993, Postscript is the dominant format for exchanging printable documents. (Today it’s PDF.) Turns out that a lot of potential Z-Mail users are technically unsavvy business types who exchange Postscript files often, jumping through tedious hoops to attach them, detach them, and print them out. Automatically popping up a window that renders a Postscript attachment right on the screen would be pure magic to them, changing them from potential Z-Mail users into actual Z-Mail users.

But there is a problem. Postscript files differ from image, sound, and other document files in one important respect: whereas those latter types of file contain static, inert data, requiring special programs to render them, Postscript files are themselves full-fledged computer programs. The Postscript renderer is just a language interpreter — like a computer within the computer, running the program described by the Postscript document.

Virtually every Postscript program — that is, document — is completely innocuous: place such-and-such text on the page here, draw some lines there, shade this region, and so on. But it’s perfectly conceivable that a malicious Postscript document — that is, program — can act as a computer virus, or worm, causing the computer to access or alter files, or use the network or CPU in mischievous ways without the user’s knowledge or approval.

So launching the Postscript interpreter with an unknown document is risky at any time. Doing so automatically — as the default setting, no less, which is what the marketing guy wanted — is foolhardy. (The reason it’s generally safe to send Postscript documents to Postscript printers — which include their own Postscript interpreters — is that unlike computers, printers do not have access to resources, like your files, that can be seriously abused.)

We, the Z-Code engineers, explain the situation and the danger. The marketing guy dismisses the possibility of a Postscript-based attack as wildly unlikely. He’s right, but we point out that adding the feature he’s asking for would make such an attack more likely, as word spreads among the bad guys that Z-Mail (a relatively widely deployed e-mail system in its time and therefore a tempting hacking target) is auto-launching Postscript attachments. Marketing Guy argues that the upside of adding the feature is potentially enormous. We say that one spam campaign containing viral Postscript attachments could cripple the computers of Z-Mail users and only Z-Mail users, a potential PR catastrophe. Marketing Guy says that our users don’t know or care about that possibility and neither should we. We say it’s our job to protect our users from their own ignorance.

The issue gets bumped up to Dan, our president, who is clearly leaning toward the marketing guy’s enormous potential upside. But after we vigorously argue the technical drawbacks of the plan and our responsibility to keep our users safe in spite of themselves, he goes with the suggestions from Engineering: do add a Postscript-launching option but turn it off by default, and educate users about the danger when they go to turn it on.


This is a run-of-the-mill example of the kind of tension that exists between Marketing and Engineering in all software companies. Issues like this arose from time to time at Z-Code, and sometimes Engineering carried the day, and sometimes Marketing did. It was a good balance: it took Marketing’s outlandish promises to keep Engineering moving forward, and it took Engineering’s insight and pragmatism to keep the product safe and reliable.

As an industry insider, my impression of Microsoft is that Marketing wins all the arguments, with all that that implies for the safety and reliability of their software.