Credit next to where credit is due

Earlier today I was walking through my office when I saw this written on the whiteboard in a co-worker’s cubicle:

The most likely way for the world to be destroyed, most experts agree, is by accident. That’s where we come in; we’re computer professionals. We cause accidents.
— Nathaniel Borenstein

Small world. Nathaniel Borenstein was the professor of a comparative programming languages course that I took in the spring of 1987 at CMU (and later became my manager, mentor, and friend). He uttered this now-Internet-famous saying while teaching that course, and I am the one who originally propagated it on to the Internet.

Thought you should know.

No place for common sense

Not that this is especially deserving of a reasoned rebuttal:

Peanut butter disproves evolution

…A (serious) Creationist clip showing how peanut butter disproves the theory of evolution…

The video explains that evolutionists claim that energy plus matter sometimes results in the creation of life. But since no one has ever found spontaneously-generated life in a jar of peanut butter, that means that matter plus energy from the sun couldn’t have caused life on Earth… Link

…but I just happened to have one handy in some old e-mail. An outspoken creationist friend of mine wrote:

there are over 200 million different species on this planet. Since each is (presumably) evolving differently and over time, it seems reasonable to expect that one, only one, just one tiny one, of these 200,000,000 species would have “sprouted wings” in the last 150 years

where I understood “sprouted wings” to mean “underwent a significant, observable evolutionary change.” That may be a common sense outlook, but this is no place for common sense. Common sense breaks down when dealing with fantastically large numbers and fantastically small odds. Here is how I replied:

Let’s say the earth is 4.5 billion years old, and it took all that time to produce 200 million existing species. (We’ll treat the many other species that have come and gone as statistical fluctuations.) That’s 0.044 species per year on average. Over 150 years you should then expect to see the emergence of 6.67 new species on average, which is .00000334% of the total number of species. Easy to miss.

Let’s do it another way: 0.044 species per year is 22.5 years per species — that is, we should expect a new species every 22.5 years. Assuming each of the existing 200 million species is equally likely to spawn that new species, each species must wait 3.12 billion years to have a 50-50 chance of creating a successor.

(That’s

22.5 × log_(1 − 1/200,000,000) 0.5

that is, the log, to the base 1 − 1/200,000,000, of 0.5, times 22.5. 1 − 1/200,000,000 is 0.999999995, which is the odds of a particular species not spawning the new species in one 22.5-year interval. 0.999999995 × 0.999999995 is the odds of not spawning it for two intervals in a row; 0.999999995 × 0.999999995 × 0.999999995 is the odds of not spawning it for three intervals in a row; and so on. How many times must you multiply 0.999999995 by itself to get to odds of 0.5? That’s what log_(1 − 1/200,000,000) 0.5 tells you: about 139 million times, and 139 million intervals of 22.5 years each is roughly 3.12 billion years.)
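The same back-of-the-envelope arithmetic can be checked in a few lines of Python. This is a sketch of the estimate as given, not a biological model: the species count, the resulting 22.5-year interval, and the equal-likelihood assumption are all taken straight from the paragraphs above.

```python
import math

SPECIES = 200_000_000            # existing species (the estimate's figure)
EARTH_AGE = 4.5e9                # years

years_per_species = EARTH_AGE / SPECIES   # = 22.5 years between new species

# Treat each 22.5-year interval as one trial in which a particular species
# has a 1-in-200,000,000 chance of being the one that spawns the newcomer.
p_miss = 1 - 1 / SPECIES
trials_for_even_odds = math.log(0.5) / math.log(p_miss)   # ~139 million trials
years_for_even_odds = trials_for_even_odds * years_per_species

print(f"{years_per_species:.1f} years per new species")
print(f"{years_for_even_odds:.2e} years for a 50-50 chance")  # ~3.12e9
```

Running it reproduces the 3.12-billion-year figure from the reply above.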

That’s by no means a rigorous analysis — it’s full of extremely coarse assumptions, among other things — but it should be at least accurate enough to convey the vastness of the timescales involved, the number of species, and the odds against having any particular evolutionary expectation met.

Or, as Darryl Zero said,

Now a few words on looking for things. When you go looking for something specific, your chances of finding it are very bad, because of all the things in the world, you’re only looking for one of them. When you go looking for anything at all, your chances of finding it are very good, because of all the things in the world, you’re sure to find some of them.

Andrea-woman!

Speaking of superheroes, my wife Andrea has a few amazing superpowers. For instance, she has the power to make strangers tell her intimate details of their lives. I’ve seen it happen! Perhaps on another occasion I’ll write about that power at greater length, but I fear that if the government ever gets wind of what she can do, they’ll ship her undercover somewhere and we’ll never see her again.

One of her lesser superpowers was demonstrated a couple of years ago when I bought the DVD set of HBO’s Harold and the Purple Crayon series for my kids. The day it arrived from Amazon I unwrapped it and played the first episode. The kids were delighted. When the end credits rolled, I was mildly surprised to see Sharon Stone’s name as the narrator.

A little later, while we were watching another episode, Andrea came home. I asked her, “Can you guess whose voice that is doing the narration?”

Andrea listened for a few moments and thought, then said, “Sharon Stone?”

You could have knocked me over with a feather.

Now, Sharon Stone is a beautiful woman and a fine actress. But I think even her most ardent fans would agree that her voice, while pleasant, even attractive, is not particularly distinctive. It’s generically feminine, with no unique accent or timbre or phrasing. To my ears, the voice reading that narration in a soothing, maternal fashion could be anyone’s. Furthermore, Andrea — unlike me — is conspicuously inattentive to the world of Hollywood and celebrities. Movies, to her, are to be watched, hopefully enjoyed, and then largely forgotten. Movie stars mean almost nothing to her, and with the hoopla surrounding Basic Instinct and Casino more than a decade in the past, Sharon Stone in particular was not readily brought to mind. (Sorry, Sharon.)

If I hadn’t known it was Sharon Stone, and someone had asked me to guess whose voice it was (indicating, by the very asking, that the answer must be a surprising celebrity), I would have said Meryl Streep or Madonna or somebody. But in under ten seconds Andrea came back with, “Sharon Stone.”

I have satisfied myself that Andrea had no secret foreknowledge of the answer, and that no ordinary human (who’s not a friend or a devoted fan of Sharon Stone) could have gotten the right answer so quickly, and on the first try. The only remaining explanation: it’s a superpower.

Now all that remains is figuring out what possible application this power can have in the fight against supervillainy.

Plotting Pirates

Mystery Man On Film is sponsoring a screenwriting blog-a-thon this weekend. Here’s my contribution. It’s about the script for Pirates of the Caribbean: Curse of the Black Pearl, by Terry Rossio and Ted Elliott.

(By the way, I also have a very different kind of analysis of the same movie.)

Spoilers ahead, says I! Ye be warned.

Continue reading “Plotting Pirates”

Archer’s birthday invitation

At Archer’s upcoming birthday party, the theme is “superheroes.” Guests will choose from a wide variety of capes, masks, and insignia to customize their superheroic alter egos.

Here’s the invitation. The real-world version has the black and red elements printed on transparency, which is then affixed along its top edge — like a cape — to a backing of yellow card stock.

Indiana Jones and the Rolling Roles

The latest Indiana Jones movie, Indiana Jones and the Last Crusade, came out in 1989 and was set in the year 1938. Next year, George Lucas, Steven Spielberg, and Harrison Ford will present a fourth Indiana Jones movie. In real time, 19 years will have elapsed since the last one.

Since Harrison Ford has visibly aged in that time, it’s reasonable to expect that a comparable interval has elapsed in story time between Indy 3 and Indy 4. Let’s say that the story interval is not 19 years but 24. That opens up a pretty interesting story possibility.

It’s 1962. An aging Indiana Jones has made a discovery of tremendous personal importance to himself, something he’s been looking for all over the world for thirty years. And for some reason, the first thing he does is to make his way to a small city in California to track down an obnoxious loudmouth with a fast car and a taste for Stetson cowboy hats — Bob Falfa.

Jones tries to convince Falfa to accompany him on a unique project. Mysteriously, Jones tells Falfa that he can divulge no details (“You wouldn’t believe me if I told you”) but, knowing Falfa’s love of fast cars, promises him the chance to drive something faster than anyone’s ever seen.

This was the wrong thing to say. Bob Falfa’s pride is hurt; his own car, he asserts, is the fastest thing on wheels. “And I’ll prove it to you!” Falfa storms off before Jones can get another word in and, almost at once, he goads a local hood, John Milner, into a drag race — which Falfa loses, spectacularly, trashing his car in the process.

Humiliated, Falfa leaves town that very day and changes his identity, swearing off hot rods and Stetson hats in a bid to be untraceable. (But he can’t completely break with the past. His new name, Martin Stett, commemorates his preferred hatmaker.) Stett kicks around for a few years and ends up with a gig in San Francisco as the personal assistant to a wealthy and unsavory businessman known as The Director.

Late one night Stett finds himself in a high-stakes poker game with some hardcore gamblers, including one charming out-of-towner (“I’m just passing through”) who’s losing badly. Out of funds on a big hand, the stranger puts his pink slip in the pot, assuring everyone that it’s for “the fastest hunk of junk in the galaxy.” Stett wins the hand — and learns to his astonishment that he’s the new owner of a spaceship called the Millennium Falcon. The stranger, Lando Calrissian, is devastated but gracious in defeat. He offers to give piloting lessons to Stett in return for a lift back to his home galaxy far, far away.

After dropping off Calrissian at a bustling spaceport, Stett flies around this new galaxy for several years, picking up odd jobs where he’s able and enjoying his new solitude so much that he changes his name again, this time to Solo. Over time he befriends a Wookiee, a Jedi, and a princess, and plays a role in reforming galactic politics.

Feeling nostalgic one day, Solo takes a long flight back to Earth and is a little puzzled to discover that, due to the time-distorting effects of faster-than-light travel, he has arrived years before he left. Thus unable to visit his old stomping grounds — they don’t exist yet! — he makes to leave immediately, but the Falcon’s hyperdrive, which has always been finicky, gives out altogether. Solo is stranded on a planet where there are no spare hyperdrive parts for thousands of light years in every direction.

With no other options, he conceals the Falcon in the New Mexico desert and begins researching ways to rebuild the hyperdrive from raw materials available on Earth. His research reveals the existence of ancient Etruscan mineral-smithing techniques that produced artifacts suitable for use in the hyperdrive motivator.

Solo begins hunting for Etruscan artifacts all over the world and is soon drawn into the world of archaeology, for which he has adopted yet another new alias — Indiana Jones — and reindulged his old love of broad-brimmed headwear. Along the way he has numerous new adventures and his repair of the still-concealed Millennium Falcon is sidetracked into an on-again, off-again project whose highlight is a dramatic near-crash during a test flight in 1947.

Finally, by 1962, Jones/Solo/Stett/Falfa has accumulated enough Etruscan jewelry and pottery and so on to build a hyperdrive motivator and complete the Falcon’s repair. However, he is by now old enough that his arthritis robs him of the agility needed to crawl in and among the parts of the Falcon’s engine machinery. What he needs is someone younger, mechanically inclined, and trustworthy. He knows just the person: an aimless young hot-rodder named Bob Falfa. And this time he won’t insult his car…

Elementary, my dear human

On my commute recently I listened to a recording of the talk given last month by Vernor Vinge to The Long Now Foundation on the subject of alternatives to “the Singularity.”

Vernor Vinge is an acclaimed science fiction author and futurist. The Long Now Foundation is an organization of technologists, artists, and others dedicated to pondering the challenges facing society on very long time scales, on the order of thousands of years. And “the Singularity” is a concept invented decades ago by Vinge that says, in effect: technological progress is advancing almost unavoidably to a point (called the Singularity) where technology itself will exceed the intelligence and abilities of humans. After the Singularity, continued technological advancement is in the hands of technology that’s literally superhuman. It proceeds at a superhuman pace according to superhuman motives. Just as our laws of physics break down at the event horizon of a black hole, it is in principle impossible for us to make predictions about the future beyond the Singularity, when things will be as incomprehensible to us humans as, in Vinge’s words, “opera is to a flatworm.”

Although Vinge believes that the Singularity is the likeliest non-catastrophic outcome for the future of humanity (and there are many who agree and many who don’t), his talk to The Long Now Foundation addressed alternative, non-Singularity possibilities. What might prevent the Singularity from occurring? War and various catastrophes on a global scale are obvious ones. But there are two interesting non-Singularity possibilities that Vinge did not discuss.

The less interesting and less likely of the two possibilities is that there is some fundamental limit on the complexity of information processing systems, and human brains are already at or near that limit. If these two suppositions are true, then it is not possible for technology to exceed human reasoning or inventing power by a significant amount — though it would still be easier to employ vast, tireless armies of reasoning and inventing machines than to recruit comparable numbers of people. (Interestingly, Vinge posits just such a fundamental limitation in his science fiction masterpiece, A Fire Upon The Deep — a rousing and thought-provoking adventure, and the only sci-fi story I’ve ever come across that feels truly galactic in scope.)

Here’s the non-Singularity possibility I like better: though machine intelligence may exceed that of humans, human intelligence can keep up, like Dr. Watson arriving at a conclusion or two of his own while following Sherlock Holmes around, or like me surrounding myself with friends whose superior intellect and wit eventually rubbed off on me, at least a little.

Consider that a hundred years ago, it took geniuses at the pinnacle of human intelligence to devise the counterintuitive physical theories of relativity and quantum mechanics that, today, are grasped (in their rudiments) by children in middle school. Consider that the same race of beings that once gazed up at the heavens and made up fairy tales about the constellations has now charted and explained much of the visible universe, almost all the way back to the beginning of time — and it took only a few dozen centuries.

Perhaps there are realms of thought and invention that require posthuman brainpower to discover. But I’m optimistic that where our future technology leads, we can follow.

World widescreen web

Thinking of upgrading your conventional picture-tube TV to a fancy new flat-panel widescreen? But you’re on a budget and don’t want to go overboard? Confused about what size TV to buy? You’ve come to the right place.

The main criterion for choosing a screen size is one that I have not seen described in other TV buying guides: viewing area. The viewing area of a 32″ conventional TV is 492 square inches, whereas the viewing area of a 32″ widescreen TV is a mere 438 square inches! If you’re upgrading from a 32″ conventional TV you’ll want at least a 34″ widescreen to get the same viewing area.

Here’s how I arrived at those figures.

The advertised size of a TV display is the length of the diagonal. If from the diagonal we can determine the height of the display, h, and the width, w, then the viewing area is h × w. Thanks to Pythagoras we know that h² + w² = 32². But this isn’t enough information to determine the viewing area: we also need the fact that the aspect ratio of most conventional TV displays is 4:3, which means the width of the display is four-thirds the height.

Substituting 4h/3 for w and then simplifying gives us:

h² + (4h/3)² = 32²
h² + 16h²/9 = 32²
25h²/9 = 32²
h = √(9 × 32²/25)
h = 3 × 32/5 = 19.2

Plugging that into the formula for viewing area (h × w) and recalling that w = 4h/3,

h × 4h/3 = 19.2 × 4 × 19.2/3 = 491.52 square inches

Knowing that the aspect ratio of widescreen displays is 16:9 and using similar arithmetic gives a result of 438 square inches for a 32″ diagonal.

In fact, the math shows that for a given diagonal, the viewing area of a 16:9 display will always be about 11% less than the viewing area of a 4:3 display.
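The derivation above can be checked with a small Python sketch; it is a straightforward translation of the Pythagorean arithmetic, nothing more:

```python
import math

def viewing_area(diagonal, ratio_w, ratio_h):
    """Viewing area (square inches) of a display with the given
    diagonal and aspect ratio ratio_w:ratio_h."""
    # From h^2 + w^2 = d^2 with w = (ratio_w/ratio_h) * h:
    h = diagonal * ratio_h / math.hypot(ratio_w, ratio_h)
    w = h * ratio_w / ratio_h
    return h * w

print(viewing_area(32, 4, 3))    # ~491.5 sq in for a 32" conventional set
print(viewing_area(32, 16, 9))   # ~437.6 sq in for a 32" widescreen

# The ~11% penalty holds for any diagonal:
print(1 - viewing_area(32, 16, 9) / viewing_area(32, 4, 3))  # ~0.11
```

The ratio of the two areas is independent of the diagonal, which is why the 11% figure is universal.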

But wait! It’s not as simple as finding the widescreen TV that has at least the same viewing area as your conventional TV. You should also take into account the kinds of programming you watch.

Do you watch a lot of wide-format movies on your 4:3 TV? If so, you’ve certainly noticed the “letterboxing” needed to fit the wide aspect ratio of the film into the narrow one of the display. You’re not using the entire viewing area; some of it is wasted, as much as 43% of it for very wide formats such as “CinemaScope.” With a 16:9 TV the need for letterboxing wide-format movies is decreased or eliminated.

Similarly, if you watch a lot of conventional TV programming (sitcoms, newscasts, etc.) on a widescreen TV, you’ll get “reverse letterboxing,” also called pillar boxing, where the black bars appear not on the top and bottom but on the left and right of the image to make the taller aspect ratio fit into a shorter one. Here again you’re wasting some of your viewing area.

So think about the kinds of programming you watch and consult this handy table that shows the true image size (in square inches) for various combinations of TV diagonal size, TV aspect ratio, and programming aspect ratio. Choose a TV that gives you the best image size you can afford for the types of programming you typically watch.

                         Program aspect ratio
               1.33      1.66      1.77        1.85         2.35
               (4:3)     (5:3)     (16:9)      (13:7)       (33:14)
               very      some      “wide-      VistaVision  CinemaScope
               common    movies    screen”

4:3 screens
    20″         192       154       144         138          109
    27″         350       280       262         252          199
    32″         492       393       369         354          279
    36″         622       498       467         448          353
    42″         847       677       635         610          480
    46″        1016       813       762         732          576
    50″        1200       960       900         865          681

16:9 screens
    20″         128       160       171         164          129
    27″         234       292       312         299          236
    32″         328       410       438         420          331
    36″         415       519       554         532          419
    42″         565       707       754         724          570
    46″         678       848       904         869          684
    50″         801      1001      1068        1027          808
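For the curious, here is a sketch of how such table entries can be computed, letterboxing and pillar boxing included. It uses the decimal aspect ratios above, so individual results may differ from the table by a square inch of rounding here and there:

```python
import math

def image_area(diagonal, screen_ratio, program_ratio):
    """Actual image area (square inches) when a program with one aspect
    ratio is shown on a screen with another."""
    h = diagonal / math.sqrt(1 + screen_ratio**2)   # screen height
    w = h * screen_ratio                            # screen width
    if program_ratio > screen_ratio:
        # Program is wider than the screen: letterbox.
        # Full width, reduced image height.
        return w * (w / program_ratio)
    # Program is narrower than (or equal to) the screen: pillar box.
    # Full height, reduced image width.
    return h * (h * program_ratio)

# e.g. CinemaScope (2.35:1) on a 32" 4:3 set:
print(round(image_area(32, 4/3, 2.35)))   # ~279, matching the table
# and 4:3 programming pillar-boxed on a 32" 16:9 set:
print(round(image_area(32, 16/9, 4/3)))   # ~328, matching the table
```

When the program and screen ratios match, neither branch shrinks anything and the image fills the whole viewing area.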

Religion: another view

In previous blog posts I’ve been pretty down on religion. Well, on organized religion. Organized Western religion. But my actual outlook on the subject is more nuanced than I may have made it sound. Let me explain.

It infuriates me whenever someone tells me that religious faith is required in order to keep people moral. Apparently, if it weren’t for the fear of divine retribution, eternal damnation, etc., everyone would be a brute, stealing, raping, killing, and generally behaving badly. We would be in a Hobbesian state of nature. To keep society functioning, it is necessary for everyone to be ruled by fear. To be “God-fearing” is to be gentle and humble.

This is a very dim view of humanity — people can’t be good on their own? — and I’m happy to report that it’s as wrong as can be. In my experience, it’s the atheists and the agnostics who are by far the most moral and decent people: the most ready to lend a hand, the most reluctant to inflict harm, the most community-minded, the least selfish. They are guided not by fear for their immortal souls but by enlightened self-interest: sharing and caring buys you entrée to a culture that shares with and cares for you too. (Perhaps there’s a bit of San Francisco hippie utopianism in there as well.) For them, virtue may or may not be its own reward — it is for me — but at the very least it’s the currency with which a class of rewards can be purchased.

I submit that those who behave in a moral fashion for their own reasons instead of someone else’s are more moral. To such people, religion is probably irrelevant, especially if they’ve outgrown their simian need for a super-father-figure/tribal-leader/alpha-male.

What about everyone else? After all, it is lamentably true that not everyone behaves in a moral fashion on his or her own. Probably most people do not. For many of those, we see again and again on the local news (“if it bleeds, it leads”) how religion does not serve as an effective restraint on their darker lusts and passions, despite occasional sincere belief in divine judgment.

Which leaves the remainder: those people who aren’t moral on their own but whose wrongdoing is effectively prevented by religious belief. They want to murder and steal and covet their neighbors’ wives and kick adorable defenseless puppies, but they don’t because God is watching.

Are there many or few such people? The Talmud says that to save one life is like saving the world. By that reasoning, if just one would-be victim’s life is spared by the inhibiting effects of religion on his or her would-be killer, then religious belief is a good thing. On the other hand, think of all the lives that runaway religious belief has cost over the centuries. In attempting to curtail one kind of evil, religion unleashes another kind. Which way does the scale tip? Does religion do more good than harm, or more harm than good?

Violent fanatics are the dark side of religious belief. Is it possible to have religion without creating fanatics? That would be the best of all possible worlds. I suspect, however, that, just like acting morally, acting fanatically is possible with or without religion to justify it.

…But without a religion to organize around, the damage they could do would be limited. Hmm, I guess I’m down on religion after all.

East is east and west is… wet?

For my first twenty-six years I lived near the East Coast: first in New York City for eighteen years, and then in Pittsburgh for eight.

After that I moved to California and encountered a strange phenomenon: my sense of direction kept getting confounded by having the ocean on the wrong side!

Though I almost never saw the ocean in New York other than when I went to the beach, and of course never ever saw the ocean in Pittsburgh, I still unconsciously navigated by the knowledge that where the ocean was, was east. After getting to California, though again I seldom actually saw the ocean, I had a lot of trouble adjusting to the knowledge that where the ocean was, was now west. In fact, for a while I made a conscious effort to think not of the nearby Pacific Ocean but of the distant Atlantic for purposes of orienting myself around the Bay Area. (And, that worked.)

What makes this interesting is that, years later, I discovered that other East-Coast transplants had encountered the same strange phenomenon.

I wonder what the larger significance of this phenomenon could be, if there is one. Does it bespeak some innate primal connection we all have to the sea? Is it related somehow to the way migrating birds navigate by the shapes of shorelines? If there were no ocean nearby but there was a major mountain peak, would I unconsciously relate my position to that instead?