Today I am a man… for thirty years

Thirty-one years ago I was a very secular Jew, along with my family and a large proportion of Jewish families in New York City. We lit candles on Chanukah, we read the Haggadah at Passover, and we told each other happy new year in the middle of September, but that was about it as far as the religion went, and it suited me fine.

But then my friends started having bar mitzvahs and I got jealous. So sometime in 1979 I informed my parents — who had left the decision up to me, and who thought they were getting off the hook without planning a bar mitzvah — that in fact I wanted to have one and that it had to be before the year was out. I didn’t want to be the only one of my friends whose bar mitzvah spilled over into the next decade!

To have a bar mitzvah I had to be able to read Hebrew, which meant going to Hebrew school, something that bar-mitzvah-bound kids began doing at age eight or nine; and here I was already pushing thirteen, the bar mitzvah age. Forest Hills Jewish Center, a conservative synagogue, wouldn’t take me, because I was too old. (A year later, Yoda would make the same complaint about training Luke Skywalker.) But Temple Sinai, a reform synagogue (now The Reform Temple of Forest Hills), did.

I was the biggest kid in the class but a motivated student. Within just a couple of months I was reading Hebrew fluently — which is to say, I learned the alphabet and the pronunciation, and so could make all the right sounds. Comprehension was something else altogether.

Rabbi Irvin Ungar set my bar mitzvah for the fifteenth of December — just made it! — and began my training. I started attending sabbath services each week to become familiar with the sequence of events and the liturgy. I learned how to chant my Torah portion (“Vayeshev”) and my haftarah. It was my first serious exposure to ritual and I took to it like a duck to water. Combined with Rabbi Ungar’s learned and gregarious mentoring style, and influenced by the involvement of my friend Chuck with his synagogue, I became a surprisingly observant Jew, to the delight of my parents (who, as noted above, were not themselves particularly observant).

While I was receiving religious instruction, my parents were busy planning the reception. They booked a ballroom at the Sheraton in Elmhurst and sent invitations to the extended family. I invited some of my new Hunter friends and a few from my elementary school days. A couple of months before the event, I stopped eating chocolate and fried food entirely, determined that this was the best way to ensure blemish-free skin on the big day. (And it worked!)

The party needed music, and my parents began looking into bands and DJ’s. One musician (with the memorable not-to-be-confused-with-the-auto-repair-chain name Lee Myles) offered to come to our house with a videotape of his band performing — and to bring along a videocassette player, which in 1979 almost no one had. I was beside myself with excitement at the prospect of seeing one of those contraptions in operation in my very own living room, and when he arrived, everything he said to my parents was just so much droning. It took forever before he finally stopped talking and hauled the enormous player out from its carrying case, along with its multifarious cables and adapters. That’s when I finally joined in the conversation, chattering away about the relative merits of coax connectors versus spade lugs, VHS versus Betamax, tuning via channel 2 versus channel 3, etc. In the end we got to see about thirty disappointing seconds of fuzzy video footage before all the equipment got disconnected and put away.

We didn’t hire Lee Myles.

Everything finally came together on this date thirty years ago.


That’s me in the white turtleneck. Also pictured: three future lawyers.

I conducted my parts of the Saturday-morning service so well that I was invited to become Temple Sinai’s first official “rabbi’s assistant,” a position I held for many weeks thereafter. I delivered an original speech about Judaism and becoming a man and so on that I remember not at all, but that was received (atypically for a bar mitzvah speech) attentively and with disbelief that I’d written it myself. And the reception, though mostly a blur, was memorable at least for the poster-sized cartoon wailing wall that my father drew and stood on an easel for my guests to sign (and that became a wall-art fixture at home for years); and for the moment that my friends took me aside and welcomed me to official manhood by literally showering me with foil-wrapped condoms (which were far more giggle-worthy then — and embarrassing to buy — than they are in this age of strident safe-sex awareness).

Some months later, Rabbi Ungar moved far, far away. His replacement, whatever his virtues might have been, was a zero in the motivating-young-people department. My scientific bent (and attendant religious skepticism) reasserted itself, the novelty of a Dixie cup of sweet wine each Saturday morning wore off, and my tenure as rabbi’s assistant, and my flirtation with a devout life, ended soon after.


Postscript. Helen Keller was one of my mom’s heroes, and The Miracle Worker, the story of Keller’s relationship with her teacher, Annie Sullivan, was one of her favorite movies.

In trying to find a web link for Temple Sinai while writing this post, I ran across an article entitled “Helen Keller: Citizen of Forest Hills.” It was the first I’d ever heard that my mom’s hero lived in the same neighborhood where (years later) she raised me; I’m not sure my mom ever knew. But more than that — the article reveals that Helen Keller’s Forest Hills house later became the very site of Temple Sinai!

Darnedest family math

Here is an exchange between me and my son Archer (age 5 1/2) this morning.

Archer: Are you Aunt Suzanne’s dad?

Me: No, you know what I am to her. I’m her what?

Archer: Her sister?

Me: No…

Archer: Her brother?

Me: Yes! Who is Aunt Suzanne’s dad?

Archer: Grandpa?

Me: Right. Who’s my dad?

Archer: Grandpa.

Me: Right! Who’s your dad?

Archer: You!

Me: Right. Who’s your brother?

Archer: Jonah.

Me: Who’s your sister?

Archer: Pamela.

Me: Who’s my brother?

Archer: [thinks hard] …Nobody?

Me: Right! It was a trick question. But I didn’t fool you, did I?

Archer: [excitedly] No. ’Cause my brain said, “I never heard Daddy say he had a brother before.” So I added that to my brain and then I took away the brother and my brain said, that’s right!

Darnedest negotiation

Yesterday Andrea and I celebrated our tenth wedding anniversary (and our twenty-first year of togetherness). To get some alone time, we packed the kids off to the house of some friends.

I asked them to get together the things they’d need for an overnight. They disappeared into their room and came back out into the living room a minute later with an armload of stuff apiece. But Jonah forgot his socks, and he was feeling lazy, so he said to Archer, “If you go get me some socks, I’ll give you…” (and here he thought for a moment) “…a hug!”

Archer said, “OK!” at once and disappeared back into their room — whereupon Jonah leaned over to me and whispered, “I’m actually going to give him a hug and a kiss!”

The richest man in town

Earlier today I sold my last shares of Amazon.com stock remaining from Amazon’s 1998 purchase (in cash, stock options, and shares) of the Internet Movie Database, a company I co-founded. This brings to a close an adventure that began as a hobby in the mid-1990’s, that turned into a job, that yielded riches, glamor, excitement, and renown (not to mention tedium, anguish, and heartache, but nothing worthwhile is easy).

At its peak during the dot-com boom, my ownership of Amazon.com was worth millions. Thanks to the dot-com crash and some bad planning, I ended up extracting only a fraction of that value, and I still haven’t entirely gotten over it. But it’s hard to feel too bad: it was a great ride, and with the proceeds we bought some cool toys and took some fun trips. It allowed me to earn practically nothing while launching another startup, where today my wife and several others earn a comfortable living. With Amazon money we had a terrific wedding, got a cozy home, and started an amazing family. Like George Bailey, I am the richest man in town.

Here’s lookin’ at you, Amazon. Thanks for everything.

Right move made

Before the iPhone and the BlackBerry was the Sidekick, a.k.a. the Hiptop, the first mass-market smartphone and, for a while, the coolest gadget you could hope to get. Famously, and awesomely, the Hiptop’s spring-loaded screen swiveled open like a switchblade at the flick of a finger to reveal a thumb-typing keyboard underneath, one on which the industry still hasn’t managed to improve. Your Hiptop data was stored “in the cloud” before that term was even coined. If your Hiptop ever got lost or stolen or damaged, you’d just go to your friendly cell phone store, buy (or otherwise obtain) a new one, and presto, there’d be all your e-mail, your address book, your photos, your notes, and your list of AIM contacts.

The Hiptop and its cloud-like service were designed by Danger, the company I joined late in 2002 just as the very first Hiptop went on the market. I worked on the e-mail part of the back-end service, and eventually came to “own” it. It was a surprisingly complex software system and, like much of the Danger Service, required continual attention simply to keep up with rising demand as Danger’s success grew and more and more Sidekicks came online.

Early in 2005, the Danger Service fell behind in that arms race. Each phone sought to maintain a constant connection to the back end (the better to receive timely e-mail and IM notices), and one day we dropped a bunch of connections. I forget the reason why; possibly something banal like a garden-variety mistake during a routine software upgrade. The affected phones naturally tried reconnecting to the service almost immediately. But establishing a new connection placed a momentary extra load on the service as e-mail backlogs, etc., were synchronized between the device and the cloud, and unbeknownst to anyone, we had crossed the threshold beyond which the service could no longer tolerate the simultaneous reconnection of that many phones. The wave of reconnections overloaded the back end and more connections got dropped, which created a new, bigger reconnection wave and a worse overload, and so on and so on. The problem snowballed until effectively all Hiptop users were dead in the water. It was four full days before we were able to complete a painstaking analysis of exactly where the bottlenecks were and use that knowledge to coax the phones back online. It was the great Danger outage of 2005 and veterans of it got commemorative coffee mugs.
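The dynamics of that failure are easy to reproduce in miniature. Here is a toy simulation of a reconnection storm; the phone counts, capacity, and costs are invented for illustration, and this is only the shape of the feedback loop, not Danger's actual service:

```python
# Toy model of a reconnection storm. All numbers are invented.
# Steady connections are cheap, but *re*connecting is expensive because
# backlogs must be synchronized. When total load exceeds capacity,
# connections get dropped, and the dropped phones all retry next tick.

def simulate(phones=100_000, capacity=120_000, idle_cost=1.0,
             reconnect_cost=5.0, initial_drop=10_000, ticks=10):
    connected = phones - initial_drop
    reconnecting = initial_drop
    history = []
    for _ in range(ticks):
        load = connected * idle_cost + reconnecting * reconnect_cost
        # Overload: shed enough connections to get back under capacity.
        dropped = min(connected, int(max(0.0, load - capacity) / idle_cost))
        connected = connected - dropped + reconnecting
        reconnecting = dropped
        history.append((connected, reconnecting))
    return history

for tick, (up, retrying) in enumerate(simulate()):
    print(f"tick {tick}: {up:6d} connected, {retrying:6d} retrying")
```

With these made-up numbers the system never recovers: each wave of reconnections overloads the service into dropping the next wave's worth of connections. Set `reconnect_cost` equal to `idle_cost` and the same initial drop is absorbed in a single tick; it was the extra cost of reconnecting, not the steady-state load, that made the service metastable.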


The graphs depict the normally docile fluctuations of the Danger Service becoming chaotic

The outage was a near-death experience for Danger, but the application of heroism and expertise (if I say so myself, having played my own small part) saved it, prolonging Danger’s life long enough to reach the cherished milestone of all startups: a liquidity event, this one in the form of purchase by Microsoft for half a billion in cash, whereupon I promptly quit (for reasons I’ve discussed at by-now-tiresome length).

Was that ever the right move. More than a week ago, another big Sidekick outage began, and even the separation of twenty-odd miles and eighteen months couldn’t stop me feeling pangs of sympathy for the frantic exertions I knew were underway at the remnants of my old company. As the outage drew out day after day after day I shook my head in sad amazement. Danger’s new owners had clearly been neglecting the scalability issues we’d known and warned about for years. Today the stunning news broke that they don’t expect to be able to restore their users’ data, ever.

It is safe to say that Danger is dead. The cutting-edge startup, once synonymous with must-have technology and B-list celebrities, working for whom I once described as making me feel “like a rock star,” will now forever be known as the hapless perpetrator of a monumental fuck-up.

It’s too bad that this event is likely to mar the reputation of cloud computing in general, since I’m fairly confident the breathtaking thoroughness of this failure is due to idiosyncratic details in Danger’s service design that do not apply at a company like, say, Google — in whose cloud my new phone’s data seems perfectly secure. Meanwhile, in the next room, my poor wife sits with her old Sidekick, clicking through her address book entries one by one, transcribing by hand the names and numbers on the tiny screen onto page after page of notebook paper.

Team stein!

Yesterday morning at the doctor’s office I, Bob Glickstein, signed in at the reception desk. I was followed by a man named Milstein. He was followed by a man named Epstein!

Suppose fully 5% of this office’s patients have names ending in “stein” (surely a very generous assumption). The odds of three of those patients showing up in a row at random are slimmer than 8,000 to 1 — and they only get slimmer if the proportion of “stein” patients is less than 5%, as seems likely. (At 2%, the odds shoot up to 125,000 to 1 against.)
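A two-line back-of-the-envelope check of those figures, assuming each sign-in is independent with the same probability p of a “stein” name:

```python
# If a fraction p of patients are "steins" and sign-ins are independent,
# the chance of three in a row is p cubed.
for p in (0.05, 0.02):
    print(f"p = {p:.0%}: 1 in {1 / p**3:,.0f}")
# p = 5%: 1 in 8,000
# p = 2%: 1 in 125,000
```

A 1-in-8,000 chance and 8,000-to-1-against odds differ by a rounding error, so the quoted numbers check out.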

The likelier explanation is that it was “stein” day at this particular office. Gratifyingly, both Mr. Milstein and Mr. Epstein pronounced it STEEN like I do, not STINE like Drs. Franken- or Ein-. What are the odds of that!

Score one for the engineers

I’ve been asked about the reason for my low opinion of Microsoft. It isn’t just me of course — a lot of technologists regard Microsoft that way. Here’s an anecdote that illustrates why.


The year is 1993. No one’s ever heard of the World Wide Web. Few people have even heard of e-mail. Too often, when I explain my role at the e-mail software startup Z-Code to friends and relatives, I also have to explain what e-mail is in the first place.

Those who do know about e-mail in 1993, if transported to 2009, would not recognize what we call e-mail now. To them, e-mail looks like this:

It’s all plain, unadorned text rendered blockily on monochrome character terminals. For the most part, variable-width, anti-aliased fonts are years in the future. Boldface and italic text exist only in the imagination of the reader of a message that uses ad hoc markup like *this* and _this_. Forget about embedded graphics and advanced layout.
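Those conventions were regular enough that later, richer clients could render them mechanically. A minimal, purely illustrative sketch (nothing like this existed in 1993's mail readers, and real clients must also handle nesting, escaping, and literal asterisks):

```python
import re

def render_adhoc_emphasis(text):
    """Render *stars* as bold and _underscores_ as italics, in HTML.
    Illustrative only: a toy version of what richer clients later did."""
    text = re.sub(r"\*([^*\n]+)\*", r"<b>\1</b>", text)
    text = re.sub(r"_([^_\n]+)_", r"<i>\1</i>", text)
    return text

print(render_adhoc_emphasis("Ship it *today*, not _eventually_."))
# Ship it <b>today</b>, not <i>eventually</i>.
```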

However, in 1993 something has just been invented that will catapult e-mail into the future: the MIME standard, which permits multimedia attachments, rich text markup, and plenty more. Almost no one has MIME-aware e-mail software yet. Meanwhile, at Z-Code, we’re busy adding MIME capabilities to our product, Z-Mail. The capabilities are primitive: for instance, if we detect that a message includes an image attachment, we’ll launch a separate image-viewing program so you can see the image. (Actually rendering the image inline comes much later for everyone.)

The Z-Mail user is able to choose an auto-display option for certain attachment types. If you have this option selected and receive a message with an image attachment, your image-viewing program pops up, displaying the attachment, as soon as you open the message. (Without the auto-display option set, you explicitly choose whether or not to launch the viewer each time you encounter an image attachment.)

There comes the time that the marketing guy at Z-Code asks if we can add automatic launching of PostScript attachments, too. In 1993, PostScript is the dominant format for exchanging printable documents. (Today it’s PDF.) Turns out that a lot of potential Z-Mail users are technically unsavvy business types who exchange PostScript files often, jumping through tedious hoops to attach them, detach them, and print them out. Automatically popping up a window that renders a PostScript attachment right on the screen would be pure magic to them, changing them from potential Z-Mail users into actual Z-Mail users.

But there is a problem. PostScript files differ from image, sound, and other document files in one important respect: whereas those latter types of file contain static, inert data, requiring special programs to render them, PostScript files are themselves full-fledged computer programs. The PostScript renderer is just a language interpreter — like a computer within the computer, running the program described by the PostScript document.

Virtually every PostScript program — that is, document — is completely innocuous: place such-and-such text on the page here, draw some lines there, shade this region, and so on. But it’s perfectly conceivable that a malicious PostScript document — that is, program — can act as a computer virus, or worm, causing the computer to access or alter files, or use the network or CPU in mischievous ways without the user’s knowledge or approval.

So launching the PostScript interpreter with an unknown document is risky at any time. Doing so automatically — as the default setting, no less, which is what the marketing guy wanted — is foolhardy. (The reason it’s generally safe to send PostScript documents to PostScript printers — which include their own PostScript interpreters — is that unlike computers, printers do not have access to resources, like your files, that can be seriously abused.)

We, the Z-Code engineers, explain the situation and the danger. The marketing guy dismisses the possibility of a PostScript-based attack as wildly unlikely. He’s right, but we point out that adding the feature he’s asking for would make such an attack more likely, as word spreads among the bad guys that Z-Mail (a relatively widely deployed e-mail system in its time and therefore a tempting hacking target) is auto-launching PostScript attachments. Marketing Guy argues that the upside of adding the feature is potentially enormous. We say that one spam campaign containing viral PostScript attachments could cripple the computers of Z-Mail users and only Z-Mail users, a potential PR catastrophe. Marketing Guy says that our users don’t know or care about that possibility and neither should we. We say it’s our job to protect our users from their own ignorance.

The issue gets bumped up to Dan, our president, who is clearly leaning toward the marketing guy’s enormous potential upside. But after we vigorously argue the technical drawbacks of the plan and our responsibility to keep our users safe in spite of themselves, he goes with the suggestions from Engineering: do add a PostScript-launching option but turn it off by default, and educate users about the danger when they go to turn it on.
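That compromise, safe defaults plus an informed opt-in, is a pattern worth making concrete. Here is a sketch in Python; Z-Mail was not written in Python, and every name below is invented, so this is only the shape of the decision, not Z-Mail's actual code:

```python
# Sketch of the "off by default, warn on enable" policy described above.
# Everything here is invented for illustration.

# Viewers for these types merely render inert data.
PASSIVE_TYPES = {"image/gif", "image/jpeg", "audio/basic"}
# These types are executable programs in disguise.
ACTIVE_TYPES = {"application/postscript"}

def should_auto_launch(mime_type, prefs):
    """Launch an external viewer without asking the user?"""
    if mime_type in PASSIVE_TYPES:
        return prefs.get("auto_display", False)
    if mime_type in ACTIVE_TYPES:
        # Dangerous types get a separate opt-in, off by default.
        return prefs.get("auto_launch_active", False)
    return False

def enable_active_launch(prefs, confirm):
    """Turning on the risky option requires informed consent."""
    warning = ("PostScript attachments are programs and can behave "
               "maliciously. Launch them automatically anyway?")
    if confirm(warning):
        prefs["auto_launch_active"] = True
    return prefs

# A fresh install never auto-launches PostScript:
assert not should_auto_launch("application/postscript", {})
```

The key design point is the separate preference: opting in to auto-displaying images must not silently opt you in to running attached programs.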


This is a run-of-the-mill example of the kind of tension that exists between Marketing and Engineering in all software companies. Issues like this arose from time to time at Z-Code, and sometimes Engineering carried the day, and sometimes Marketing did. It was a good balance: it took Marketing’s outlandish promises to keep Engineering moving forward, and it took Engineering’s insight and pragmatism to keep the product safe and reliable.

As an industry insider, my impression of Microsoft is that Marketing wins all the arguments, with all that that implies for the safety and reliability of their software.

Kai-Fu Lee and me

For the summer of 1987 I had two programming internship job offers. One — the one I accepted — was from Nathaniel Borenstein, who’d been my professor for a comparative programming languages course and liked my take on the design for an e-mail filtering language, which is what the school’s Information Technology Center (ITC) would pay me to implement. The other was to work on a speech recognition project with a different Carnegie Mellon researcher, Kai-Fu Lee. That project had a strong artificial-intelligence flavor, which appealed to me at the time; but after a semester as Nathaniel’s student I knew and liked him, whereas I’d met Kai-Fu Lee only once, for the job interview. That meeting was cordial enough, but I went with the known quantity and the rest is history.

I next heard of Dr. Lee in the 90’s, when he was a senior researcher for Microsoft. He made headlines when he fled Microsoft for Google — just as I did a few years later.

Now comes the news that Kai-Fu Lee is leaving Google. That’s too bad for Google, but at least we still have Al Spector — who was Nathaniel’s old boss and mine at the ITC!

You can’t spell leisure without (some of the letters in) socialism

The future, as seen from the 1920’s through the 1960’s, was one in which automation of ever-increasing ubiquity and reliability would liberate humans from every manner of drudgery: cooking, cleaning, driving, working. Thus liberated, the “permanent problem” of humanity, as celebrated economist John Maynard Keynes wrote in 1930, would be “to occupy the leisure” time that would be the inevitable result of consistent technological and economic progress.

Well, here we are in the future, and in spite of a conspicuous dearth of hovercars and Mars colonies, things are indeed fantastically more automated than they used to be. Those of us old enough to remember changing typewriter ribbons, getting up from the couch to turn the channel knob, and painstakingly placing the tone arm in the shiny stripe between songs would never go back. Cars aren’t driverless — yet — but some of them do unlock when their owners approach, and some of them tell you when you’re about to back into the car behind you. Robots vacuum your floors. Satellites tell you how to get from point A to point B. And don’t forget the Internet, which allows you to shop, work, communicate, renew your driver’s license, look up airline schedules, and be informed and entertained without ever leaving the house, licking a stamp, picking up the phone, or indeed engaging any muscles north of your elbows.

And yet, I don’t know about you, but figuring out what to do with our copious leisure time doesn’t appear to be the problem of anyone I know.

Here in the era of Google and PDF files I am much more productive than I ever could have been in the bad old days of filing cabinets and mimeograph machines, and the same is true for pretty much everyone else, everywhere in the developed world. And after various innovations or outright revolutions in manufacturing, construction, supply chain management, materials science, agriculture, finance, chemical engineering, electronics, and plenty more, the cost of meeting our basic material needs is much less than it used to be.

So at first glance it seems like there should be lots more slack in our economic system, and that we ought to be able to distribute that slack to the benefit of everyone.

But when robots displace thirty percent of a factory’s labor force, the increase in productivity does not result in a life of leisure for the workers who were sent home. They’re just plumb out of work. When simpler delivery systems for news and for classified advertising come along, employees in the crumbling newspaper industry don’t kick back, job-well-done, satisfied at achieving their own obsolescence.

The investment blogger Brad Burnham recently pointed out that “Craigslist collapsed a multibillion dollar classified advertising business into a fabulously profitable hundred-million-dollar business” — an example of a phenomenon common enough to have a cool new name: the “zero-billion-dollar business.” Herein lies the problem that seems to have escaped the mid-century futurists: when dramatic efficiencies arrive in an industry, lowering its overhead, that industry doesn’t suddenly become more profitable, pocketing the difference between the new lower costs and the same old price for its goods and services, able to retire its laid-off laborers with cushy pensions. No: the industry passes the savings along to you, the consumer, according to the inexorable pressures of capitalism. Any company that didn’t would find itself undercut by its competitors. As a result, the entire industry deflates, occasionally to the vanishing point: witness the fate of horse-drawn buggies, ice vendors, and more recently, consumer-grade photographic film.

Disruptions like these are great for the majority (else they wouldn’t happen) but disastrous for those who become idled by them. In the past, the people affected would slowly filter into new positions elsewhere, but as is often observed, we’re living through a period of accelerating innovation and upheaval. It’s possible that entire job categories are disappearing faster than the remaining ones are able to absorb the jobless, and if we haven’t quite reached that tipping point yet, chances are good that we will soon. Technology and the enhanced productivity it brings means society is learning to get along — thrive, in fact — with far fewer people working, period.

Which raises the question: is this kind of progress ultimately good for humanity? Yes, it lowers the cost of our material needs, increases abundance, and lengthens and improves our lives, but only for those who remain employed and can afford the fruits of progress.

Take this trend to a plausible extreme. When driverless cars are perfected, there will be no more need for bus, truck, and taxi drivers. A coffeemaking robot in my office portends the demise of the barista. Voice recognition keeps getting better and keeps putting phone operators out to pasture. The postal service appears to be at the beginning of what promises to be a lengthy contraction.

It’s not hard to imagine a future in which only a small fraction of the eligible workforce is actually needed to do any work. Is the resulting wealth destined to be concentrated in fewer and fewer hands? What will the rest of us do?

In our march towards a shiny future of leisure we have overlooked one important ingredient, probably because it’s been taboo even to mention it. In a 2,500-word article about the world to come, written soon after the 1964 World’s Fair (which depicted that future temptingly and convincingly), and not coincidentally at the height of the Cold War, Time magazine glosses over the missing ingredient almost completely, giving it just three words at the beginning of this remarkable sentence (emphasis mine):

With Government benefits, even nonworking families will have, by one estimate, an annual income of $30,000–$40,000 (in 1966 dollars).

(That’s about a quarter million today.)
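For the curious, the conversion behind that parenthetical, using an assumed consumer-price multiplier of roughly 6.7 for 1966-to-2009 dollars (the exact factor depends on which index and end year you pick):

```python
CPI_FACTOR = 6.7  # assumed 1966 -> 2009 price multiplier, approximate
low, high = 30_000, 40_000
print(f"${low * CPI_FACTOR:,.0f} to ${high * CPI_FACTOR:,.0f}")
# $201,000 to $268,000
```

The midpoint is about $235,000, so “a quarter million” is a fair round-off.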

That’s right: at the same time that Americans were getting worked up about the Red Menace, ironically they also embraced (without quite thinking it through) a vision of the future that depended fundamentally on socialism — the redistribution of wealth, by government, from those whom society needs and rewards to those whom it doesn’t but who stubbornly continue to exist.

Unfortunately, even as we’re headed towards a workerless society that will depend more and more on government assistance, we are abandoning our traditional values about civic responsibility and the common good. We are becoming a nation of selfish graspers who by and large would rather demonize the unemployed than provide for them (even if we could afford to, which isn’t at all clear). Too many Americans are opposed in principle to any form of welfare, even though it’s right there in the Preamble of the Constitution, even though they rely on social programs themselves, knowingly or not.

These folks cling to two soundbites from the 1980’s — “Government is not the solution to our problem, government is the problem,” and “Greed… is good” — in lieu of any reasoned philosophy. An entire generation’s worth of politicians and civic and religious leaders have built their careers around these empty ideas, all but precluding rational debate on the subject, a debate we desperately need to have. We are barreling towards that efficient, workerless future, that’s for certain. But when the merest suggestion of government assistance prompts mobs to equate President Obama with Hitler or Satan, what hope is there that that future will even be livable?