
The Jurassic Age of Computers

Today, computers have language, culture, and manners. But in the 1950s, they were wild animals who communicated only in the raw grunts and growls of binary or hexadecimal numbers. So when the late J.C.R. Licklider—for many years the Massachusetts Institute of Technology’s most folksy and sagacious philosopher of the Internet—met his first computer, he stood face to face with an inscrutable savage.

On hand to introduce the professor to the beast was its proud young keeper, Ed Fredkin. Licklider wanted to see if the thing could calculate three times four.

Fredkin showed Licklider how to put the numbers, as well as the code that meant “multiply,” into the machine’s accumulator, a surprisingly physical process that involved much setting of switches and pushing of buttons. From there, the numbers would go into the memory registers, where they would be transformed via algorithmic alchemy and spat back out as a result on a small cathode-ray screen.

An unfortunate quirk of the machine was that while the accumulator had 31 bits of storage space, the memory registers had only 29. Anything going into the computer, explained Fredkin, would automatically have its last two digits chopped off. That meant that before the numbers could go into the accumulator, they all had to be multiplied by four, which Fredkin promised would magically take care of the problem. Instead of “3 times 4,” the switches were toggled to code “12 times 16.”

Once the machine was started, the result came back quickly as a pattern of dots. Fredkin translated: 48.

What was this? Neither 3 times 4 nor 12 times 16. There must be something wrong with the machine, said Licklider. But no, said Fredkin: because the information moves back from the memory registers into the accumulator before showing up on the display, the accumulator tacks on those two extra digits once again. All you have to do is divide by four to get the answer: 12. (In binary, 48 and 12 are written as “110000” and “1100,” respectively, making the division as easy as converting from cents to dollars.)
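
In modern terms, Fredkin’s two fours are just a pair of two-bit shifts: multiplying by four pads each operand with two zero bits for the narrow registers to chop off, and dividing the displayed result by four strips the two bits the accumulator tacks back on. Here is a toy sketch in Python of how the anecdote plays out; the 31- and 29-bit widths come from the story, and everything else is illustrative rather than anything the MIT machine actually ran:

    # Toy model of the anecdote: the memory registers are two bits narrower
    # than the accumulator, so values lose their last two binary digits on
    # the way in and have two zero bits tacked back on the way out.
    ACC_BITS = 31   # accumulator width, per the story
    REG_BITS = 29   # memory-register width, per the story

    def into_register(value):
        """Storing a value chops off its last two binary digits."""
        return (value >> 2) & ((1 << REG_BITS) - 1)

    def back_to_accumulator(value):
        """Bringing a result back tacks two zero digits on again."""
        return (value << 2) & ((1 << ACC_BITS) - 1)

    def multiply(a, b):
        # Fredkin's trick: pre-multiply each operand by four so the chopped
        # digits are zeros and nothing of the real number is lost.
        a_stored = into_register(a * 4)                        # 3 -> 12 -> stored as 3
        b_stored = into_register(b * 4)                        # 4 -> 16 -> stored as 4
        displayed = back_to_accumulator(a_stored * b_stored)   # 12 comes back as 48
        return displayed // 4                                  # divide by four to finish

    print(multiply(3, 4))   # prints 12; the raw display in the story would have read 48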

But what was child’s play to the technician was clear as mud to the professor. “Ed was overjoyed to see that the machine had gotten the answer right, and I was just a bit apprehensive about the journey I was just then beginning into the land of the computing machine,” Licklider reminisced in 1982, writing on the eve of a digital revolution that he himself helped bring about.

Fifty years ago, there were maybe a few thousand people in the US who could talk to computers, and they were mostly geniuses. Today, a five-year-old can learn how to interact with a computer in an afternoon. This is not because we ourselves have gotten much more sophisticated about how we interact with creatures who speak binary. We haven’t gotten much smarter in about a hundred thousand years. It is because the machines themselves have become so thoroughly domesticated.

The process has been, in retrospect, shockingly fast. It took uncountable centuries for people to breed wolves into Jack Russells. It took less than fifty years to make a howling savage like the early MIT machine give a goofy wave and offer to help us write a letter.

For all its swiftness, the future couldn’t come fast enough for Licklider, who spent most of his career presciently puzzling over “man-computer interfaces.” Licklider, often called the “Johnny Appleseed of computing,” retired in 1985 and died in 1990, sticking around just long enough to see his predictions about personal computing and interconnectivity come true. Most of the actual software he worked to develop is now mummified in the MIT library’s archives and gathering a good deal of dust. Most of his common-sense philosophizing about how computers would change (and be changed by) the world, on the other hand, remains solid and true, even at a distance of decades.

One of his last projects before retiring from MIT was GRAPPLE, a piece of software that allowed you to write a program not by typing commands in a coding language, but—imagine!—by manipulating graphic icons on a screen, with a relatively new gizmo called a “mouse.” It was written in MDL (pronounced “muddle”), an elegant dead language developed at MIT, making it kin to the legendary text-based game Zork. Licklider and his crew of student programmers grappled with GRAPPLE from 1982 to 1985, whereupon it seems to have sunk gracefully back into the mists of time, leaving many cousins but no direct descendants.

Like many such documents of its day, the November 1984 user’s manual for GRAPPLE is a formidable thing, a brick of Xeroxed paper full of raw code. In its pages, Licklider—who was very well aware of the pedagogical limitations of words on paper—is prone to self-referential griping.

“This manual is rather severely limited by the fact that there is no good way to show, in a printed report, the graphical action involved in programming,” he writes. “It is not far from that statement to the conclusion that the user’s manual for a graphical programming system should be a program—or that the system should be its own user’s manual.”

Anticipating the first on-screen tutorial by some years, Licklider saw that the central problem that lay before his generation of programmers was not teaching people to spit out ever-more-sophisticated tangles of code, but teaching computers to talk more like people—or maybe like monkeys. Bright colors, pretty pictures, objects that could be grasped and moved: it was user-friendliness, not complexity, that would truly unleash the explosive power of digital information.
