3



The Room-Sized Brain

If you were searching for the origins of modern computing, you could do worse than to start here: on Walnut Hill, just north and west of Boston, in 1912, where an overdressed lawnmower man was trudging up a grassy incline behind his machine. He took a moment to pose for a grainy photo, hands on the tiller, eyes on his work, face turned from us; the white of the grass, the black of his two-piece suit, the black of the machine. You’d deduce in a second, of course, that its purpose is something stranger than lawn care: the tall grass is untouched, and where there ought to be blades there is a blank box, riding slung between two bicycle wheels.

It was the failed first invention of a college senior, and though it ran just as promised, it bored nearly everyone beyond its twenty-two-year-old creator. Inside the box hung a pendulum, and a disc powered by the back bicycle wheel. Resting on the disc were two rollers: one measured vertical distance and wielded a pen, one measured horizontal distance and turned the drum of paper beneath. It was a geography machine, a device aimed to put land surveying teams out of business. Using the old method, heavy on telescopes and trigonometry, three men could cover three miles of ground per day, and at day’s end, they’d have tables of data to convert into a cross-section picture of the land over which they’d slogged. The college senior claimed that he, working alone, could nearly triple their speed—and he did it by skipping straight to the picture. Inside the body of his Profile Tracer was the lay of the land spooled on a rolling drum, drawn in ink by a machine so accurate that if “it ran over a manhole cover, it would duly plot the little bump.”

It earned a patent, and simultaneous bachelor’s and master’s degrees for its creator, but little else. He made the corporate rounds and failed to sell a single one, or even the license for the patent—his cold letters unanswered, his pitch meetings over in minutes. And even if he could have said, in a hypothetical moment of awesome clairvoyance, “Look, in twenty years the guts of this lawnmower will run the most powerful thinking machine that human hands have ever built”—it would have sounded close to gibberish. But it would also have been true.

The man in the black suit is Vannevar Bush, and this photo marks his start. Pugnacious and perpetually time-strapped, grandson and great-grandson of Yankee whaling captains, saddled with a name so frustratingly hard to pronounce that he would instruct others to call him “Van” or even “John”—the twenty-two-year-old inventor would one day be, although he couldn’t possibly imagine it yet, the most powerful scientist in America.

He would preside over a custom-made brain the size of a room. He’d counsel presidents. He’d direct the nation’s scientists through World War II with the same brusqueness with which he once imagined unemploying two-thirds of the surveying profession. Collier’s magazine would call him “the man who may win or lose the war”; Time, “the general of physics.”

And not least among these accomplishments would be this: he’d be the first to see Claude Shannon for who he was.


“Suppose,” said Vannevar Bush—two decades older, now a doctor of engineering and vice president of MIT—“an apple drops from a tree.” It’s just as well that he started with an example from the high school physics chalkboard. Mathematically speaking, he was a man of only moderate brain, “fourth or fifth echelon” by his own admission. But he was blessed with brilliant hands. He’d been—like Claude Shannon, his greatest student—a basement tinkerer from his earliest memories. Much of his adult life was spent, it turned out, building dogged, untiring mathematical brains of wood and metal, brains that in some ways far outclassed his own—and that would ultimately be the scene of Shannon’s first breakthrough.

“The thing we know about that apple,” Vannevar Bush continued, “is, to a first approximation, that its acceleration is constant.” We can plot its fall on the chalkboard in seconds. “But suppose we want to include the resistance that air offers to the fall. This just puts another term in our equation but makes it hard to solve formally. We can still very readily solve it on a machine. We simply connect together elements, electrical or mechanical gadgets, that represent the terms of the equation, and watch it perform.”

What is it about an apple in a physicist’s vacuum that needs only pencil and paper, while an apple falling through the air of the real world demands solution by gadget? Both falls, as Bush noted, can be captured in differential equations—the equations at the heart of calculus that represent continuous change. So first imagine the apple falling on the head of, say, Isaac Newton (and it’s no coincidence that the man who formulated the laws of gravitation also co-invented calculus—without equations that capture change over time, there’s no making sense of gravity). In a vacuum, the apple falls 9.8 meters per second faster, each second, until it concusses Newton.

But now drop the apple on Newton in the open air. Gravity’s force, of course, doesn’t change. But the faster the apple falls, the greater the resistance of the air pushing back against it. The apple’s acceleration now depends on both the gravity speeding it up and the air resistance slowing it down, which in turn depends on the apple’s speed at any moment, which in turn is changing every fraction of a second. That is the kind of problem that calls for a more-than-ordinary brain.
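In modern notation, and only as a sketch (Bush names no particular law of air resistance, so the resisting term below is the standard textbook assumption), the two falls look like this. On the chalkboard, in a vacuum, the apple's speed v obeys

\[ \frac{dv}{dt} = g \approx 9.8 \ \text{m/s}^2, \]

which integrates at a glance to v = gt. In the open air, the resisting force grows with the very speed it opposes, so

\[ m \frac{dv}{dt} = mg - F(v), \]

where F(v) might be as simple as kv for a gentle fall. The acceleration now depends on a speed that is itself changing at every instant. For the simplest drag laws a formula can still be coaxed out; for realistic ones, where the resistance varies with speed and with the density of the air, there is in general no formula at all, and the only recourse is to grind the equation forward moment by moment, which is exactly the sort of labor a machine does not mind.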

How fast can a population of animals grow before it crashes? How long before a heap of radioactive uranium decays? How far does a magnet’s force extend? How much does a massive sun curve time and space? To ask any of these questions is to ask for the solution to a differential equation.

Or, of special interest to Bush and his electrical engineering colleagues: How great a power surge could the nation’s electrical grids tolerate before they failed? Given all the wealth and work it had taken to electrify America, it was a multimillion-dollar question. In the 1920s, reflected one of Bush’s graduate students, transmitting power from state to state was like “the towing of one car by another with a long elastic cable stretched almost to the breaking point. Under these conditions, any mishap, such as a short circuit or a sudden adding of load, would in effect snap the towing cable.” By 1926, engineers had discovered the equations that could predict the cable’s snapping point. The catch was that solving these power equations meant a long and error-prone slog: doing calculus by hand, graphing the results by hand, finding the area covered by the graphs by tracing their outline with a rolling mathematician’s tool called a planimeter, and then inserting the area figures into further equations—all of which meant that the lights would flicker and die long before the work was done.

It turned out that most differential equations of the useful kind—the apple-falling-in-the-real-world kind, not the apple-falling-down-a-chalkboard kind—presented just the same impassable problem. These were not equations that could be solved by formulas or shortcuts, only by trial and error, or intuition, or luck. To solve them reliably—to bring the force of calculus to bear on the industrial problems of power transmission or telephone networks, or on the advanced physics problems of cosmic rays and subatomic particles—demanded an intelligence of another order.


By the time Bush and his students set to work, scientists had been after such a brain for two generations. Long before it was needed to stabilize the electrical grid, it was sought for a much more ancient problem: predicting the ocean tides. For sailors, tide knowledge dictated when to come into harbor, where to fish, and even when to launch invasions. If little fishing boats could rely on guesswork and memory, the iron-sided, steam-belching ships of the nineteenth century required something more precise. And there was no precision to be had in simply looking at high-tide marks and waiting for the sea to repeat itself, because the simple model of Newton’s airless world—moon and sun tugging on the water at each day’s appointed times—fell into seeming chaos when confronted with the reality of each shoreline’s special shape and each seabed’s unseen slope. From the God’s-eye view, there is a law of tides; from our earthbound view, only some petty local ordinances.

But a half century after Newton, mathematicians found that the most chaotic-seeming fluctuations—from stock prices to tide charts—could be broken down and represented as the sum of much simpler functions, wavelike patterns that did indeed repeat themselves. Anarchy concealed order; or rather, anarchy was dozens of kinds of order happening at once, all shouting to be heard over one another. So how to find the order in the tides?
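In modern terms, the decomposition is a harmonic (Fourier) sum. As a sketch, the water level h(t) at a particular port can be written

\[ h(t) \approx a_0 + \sum_{n} A_n \cos(\omega_n t - \phi_n), \]

where each term is one of the simple, repeating tides. The frequencies \(\omega_n\) are fixed by astronomy, the periods of the moon's and sun's various cycles, and are the same everywhere; the constant \(a_0\), the amplitudes \(A_n\), and the phases \(\phi_n\) are the local numbers, different for every harbor, that had to be extracted, somehow, from a record of past water levels.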

In 1876, the wizard-bearded Scots-Irish physicist William Thomson—later ennobled as Lord Kelvin, a name he took from the river that flowed by his laboratory—proposed to do it by machine. At Thomson’s Cambridge graduation exam, the professor questioning him leaned over to his colleague and whispered, “You and I are just about fit to mend his pens.” From his days in school, he’d kept as a personal motto some lines from Alexander Pope: “Go, wondrous creature! mount where Science guides; / Go measure earth, weigh air, and state the tides.” And while the poet surely meant to speak to Man in the aggregate, Thomson could hardly be blamed if he ever imagined that he himself was the creature addressed.

Thomson’s tidal solution was something like the inverse of Bush’s lawnmower. The surveying machine would read the land’s data of hills and dips and even manhole covers and output a graph; the tide machine invented by Thomson and his brother, which they christened the harmonic analyzer, took a graph as input. The operator stood before a long, open wooden box resting on eight legs, a steel pointer and a hand crank protruding from its innards. With his right hand, he took hold of the pointer and traced a graph of water levels, months’ data on high tides and low; with his left, he steadily turned the crank that turned the oiled gears in the casket. Inside, eleven little cranks rotated at their own speeds, each isolating one of the simple functions that added up to the chaotic tide. At the end, their gauges displayed eleven little numbers—the average water level, the pull of the moon, the pull of the sun, and so on—that together filled in the equation to state the tides. All of it, in principle, could be ground out by human hands on a notepad—but, said Thomson, this was “calculation of so methodical a kind that a machine ought to be found to do it.”

And so it had been. With the equation extracted from the surf, a tide table was no longer just a record of the past, but a promise of the future. Draw the table as a graph; feed the graph into the harmonic analyzer; and finally, use the analyzer’s readings to custom-rig Thomson’s next invention, a fifteen-pulley mechanical calculator the size of a wardrobe that drew, with pen and ink, its own graph of tide levels for the year to come. In 1876, the tide predictor could accurately draw a year’s worth of the future in four hours; by 1881, it took twenty-five minutes.

It was politely received and politely shunted aside. Even in 1881, few practical equations were susceptible to mechanical solution, so it seemed wiser to go on paying pencil-pushers than to mass-produce a device with such a limited scope. Perhaps, too, Thomson’s fellow mathematicians took offense at the thought that any part of their work could be automated as easily as the labor of a factory hand. Most important, though Thomson had conceived of a truly versatile problem-solving machine, its crucial component was still missing, and it would stay missing until world war gave the search a new impetus.

So imagine, now, not a ship coming into harbor with the tide, but a dreadnought rolling on a choppy sea, readying its guns to lob an explosive shell at a moving target more than ten miles over the horizon. Imagine a sea battle between two floating arsenals that would remain, until the very end, mutually invisible. At that distance, the pitch of the waves, the density of the air at each level of the projectile’s trajectory, the curvature of the earth, even the earth’s rotation during the time of the shell’s flight would all conspire to determine whether that shell would hit water or steel. Each of those factors formed a variable in—again—a differential equation. A naval battle at that range was not simply a gunfight, but a mathematical race (in which the reward for second place was often a watery grave). In the First World War’s largest naval engagement, the Battle of Jutland, in 1916, every British ship but one steamed into battle with human-directed guns. They hit only 3 percent of their targets; they lost more than 6,000 men. With those stakes, a reliable equation-solving machine was suddenly worth the cost.

It was Hannibal Ford, a mechanical engineer from upstate New York, who supplied Thomson’s missing part. He’d gotten his start taking apart watches and clocks, and then moved on to work on typewriters. Where Thomson chose as his college watchword an heroic couplet from Pope, Ford’s page in the Cornell yearbook had an earthier motto: “I would construct a machine to do any old thing in any old way.” The machine he had constructed by 1917 automated a key step in the solution of differential equations: it found integrals, or the area under curves (including the curve of a shell in flight). Long before electronics, it could all be done mechanically. In the case of Ford’s integrator—nicknamed the “Baby Ford” by grateful American sailors—two ball bearings rested on the surface of a flat, spinning disc. They were free to move continuously across the disc’s surface: the farther from the center, the faster they’d spin. The distance from the center stood for the shape of the equation’s curve, and the speed of their spinning stood for the answer. The ball bearings turned a cylinder that powered the rest of the machine and transmitted the answer, through gears and gauges, to the gunners. Given inputs including the speed and course of the attacking ship and the enemy ship, the Baby Ford would generate the range to the target, the direction for fire, and the time the shell would be in the air. These, in turn, would dictate the angle of the guns.
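The principle of the wheel-and-disc integrator fits in a few lines of simulation. The sketch below is an illustration of the idea, not Ford's actual mechanism: the wheel's distance from the disc's center plays the part of the function's current value, each small turn of the disc plays the part of a step in the independent variable, and the accumulated turning of the wheel is the integral.

```python
# A minimal sketch of the wheel-and-disc idea (an illustration, not Ford's design):
# the disc advances in step with the independent variable x, the wheel sits at a
# radial distance equal to the current value of f(x), and the wheel's total
# rotation accumulates the integral of f.

def integrate_like_a_disc(f, x_start, x_end, steps=10_000):
    """Accumulate the integral of f from x_start to x_end in small turns of the disc."""
    dx = (x_end - x_start) / steps              # one small turn of the disc
    wheel_rotation = 0.0                        # total turning of the output wheel
    x = x_start
    for _ in range(steps):
        radial_position = f(x)                  # the wheel sits at the curve's current height
        wheel_rotation += radial_position * dx  # the farther from center, the faster it turns
        x += dx
    return wheel_rotation

# Example: integrating a falling shell's speed gives the distance fallen.
# With constant gravity, v(t) = 9.8 * t, so the distance after 2 seconds
# should come out near 0.5 * 9.8 * 2**2 = 19.6 meters.
print(integrate_like_a_disc(lambda t: 9.8 * t, 0.0, 2.0))  # ≈ 19.6
```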

Hannibal Ford was not the first to imagine such a machine. But his machine was among the first to find integrals reliably, let alone belowdecks on a ship tossed by waves and shaken by exploding shells, when a slip of a ball bearing from its orbit would send the crew back to the days of spyglasses and intuition. It was, said Vannevar Bush, “a marvel of precision and completeness.” Soon Bush would run six of them at once—and he’d set them to search not just for the pitch of a gun, but for the shapes of atoms and the structures of suns.


Thomson’s harmonic analyzer, Ford’s integrator, Bush’s Profile Tracer: conceived in isolation from one another, single-purpose machines built to answer only one specialized question apiece, they still had a crucial quality in common. They were all working models of the physical world—of the slope of a hill or the fall of a shell—simplified down to the essence. They were all, in a way, bare-bones miniatures of the processes they described; they were, in other words, resolutely analog. But it was Vannevar Bush who brought analog computing to its highest level, a machine for all purposes, a landmark on the way from tool to brain. And it was Claude Shannon who, in a genius accident, helped obsolete it.

Bush would later recognize his computer’s precedents in Thomson and Ford. But when he first set to work in the mid-1920s, searching for a way to shrink America’s power network to the size of his lab, he was largely ignorant of his analog forebears. Where, then, did he start?

In a sense, he started as a teacher. As well as an inventor, Bush was an instructor of young engineers at a time when MIT’s electrical engineering department was coming into national prominence. Fall in Cambridge, Massachusetts, would begin with an auditorium full of bright freshmen, slacks pressed and hair freshly combed, sitting stunned as Bush punctured their self-regard. He would rise at the lectern, hold up a simple pipe wrench, and offer a simple challenge: “Describe this.”

One by one the freshmen would take their shot, and one by one their descriptions would be dismantled: Bush would show how each definition was so vague that it could apply to any number of wrenches, not this wrench in front of them. And he would conclude by reading out the exact and correct patent application:

By turning the nut to the right or left the movable jaw may be moved either toward or away from the fixed jaw, as may be desirable. The inner face of the movable jaw is formed at a right angle to its shank, and is also provided with a series of teeth, which pitch or rake on its fellow jaw. . . . The sliding or movable jaw [may] be projected outward so as to stand at an outward inclination with respect to the other jaw, in order to enable the jaws to be readily applied to a pipe. . . .

And so on.

The point was precision. In particular, the point was rigor in reducing the hard, solid world—the wrench—into symbols so exact—the patent application—that either could be flawlessly reconstructed from the other. Given the pipe wrench, produce the words for that wrench and no other; given the words, produce the wrench. That, Bush taught his students, was the beginning of engineering.

For the same reason—rigor in symbolizing the world—every engineer was taught to draw. Leave pure numbers for pure mathematicians—engineers would learn math with their hands. “A man learns to use the Calculus as he learns to use the chisel or the file,” said one reformer who helped give engineering education its practical bent in the early century. A math laboratory of that era was “well-stocked with clay, cardboard, wire, wooden, metal and other models and materials”—and with graph paper, which was only about as old as Bush was. At Bush’s MIT, math and engineering were an extension of the metal shop and the woodshop, and students who were skilled with the planimeter and the slide rule had to be skilled as well with the soldering iron and the saw. There is perhaps a source here for engineers’ persistent status anxiety, “uncertain always where they fit,” as the great critic Paul Fussell put it, “whether with boss or worker, management or labor, the world of headwork or the world of handwork.” But there was also the conviction that handwork was headwork, as long as the translations had precision. Given precision, an equation could be grasped and solved in pictures and motion, just as a wrench could be pinned down by the right words.

Working with a mechanic to build his early analog computers, Bush came to see just how thoroughly calculus could be learned by hand: “He had learned the calculus in mechanical terms,” Bush explained, “a strange approach, and yet he understood it. That is, he did not understand it in any formal sense, but he understood the fundamentals; he had it under his skin.”

In the whirring of their integrators and the turning of their gears, Bush’s machines embodied calculus. Like good engineers, they took drawings as input and gave drawings as output. They might have happened anywhere—but it’s hardly surprising that they were pieced together in an engineering department.


By 1924, Bush and his students had built an integrating machine that improved on Ford’s. By 1928, in search of the solution to a stable grid, they were able to model 200 miles of power lines in a fifty-square-foot lab. The same year, work started on an all-purpose analog computer: the differential analyzer. When it was finished, three years and $25,000 later, it was a brain the size of a room, a metal calculus machine that could whir away at a problem for days and nights on end before it ground to a halt. One problem, which measured the effects of the earth’s magnetic field on cosmic rays, took thirty weeks of spinning gears—but when it was done, the differential analyzer had solved, by brute force, equations so complex that even trying to attack them with human brainpower would have been pointless. Indeed, Bush’s lab now owned the computational power to turn from the problems of industry to some of the fundamental questions of physics.

“It was a fearsome thing of shafts, gears, strings, and wheels rolling on disks,” said an MIT physicist who turned to the differential analyzer to study the behavior of scattering electrons, “but it worked.” It was an enormous wooden frame latticed with spinning rods, resembling a giant’s 100-ton foosball set. At the input end were six draftsman’s tables, where the machine read the equations it was to evaluate, much like Thomson’s analyzer read a graph of the tides. The operators turned hand cranks that sent the machine’s pointers over a hand-drawn graph of the equation to be analyzed: “for example,” read one contemporary account, “in calculating the scattering of electrons by an atom, it is necessary to supply the machine with the relation between the potential of the atomic field and the distance from the centre of the atom.” In this way, the details of the equation were communicated to the machine’s internal shafts. Each shaft represented a variable (the current in a power line, or the size of an atomic nucleus); the greater the variable, the faster the shaft spun. These, in turn, drove integrators like Ford’s: A flat disc spun in place, and standing perpendicular on the disc was an integrating wheel. The farther the operators had placed the wheel from the center of the disc, the faster it turned. The wheel was linked to five more integrators of identical construction. At the very end, the speed of the integrating wheels drove a pencil that moved up and down as the graph paper underneath it unwound at a continuous rate. The question was a graph, and at the end, after days or even months of revolutions, the answering graph appeared.
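What made the analyzer general-purpose was feedback: the running output of one integrator could be fed back as the input to another, so the machine could act out an equation whose terms depend on its own unfolding solution. A rough software analogue of that loop, a sketch rather than the machine's actual wiring, chains two such accumulations to act out the chapter's falling apple with air resistance (the drag constant here is an illustrative value, not a measured one):

```python
# A rough software analogue (an illustration, not the analyzer's wiring) of how
# the machine acted out an equation: each "integrator" accumulates a quantity,
# and its running output feeds back to set the next instant's rates.
# The equation is the falling apple with air resistance:
#     dv/dt = g - (k/m) * v,    dy/dt = v
# g is gravity; k_over_m is an assumed, illustrative drag per unit mass.

g = 9.8          # m/s^2
k_over_m = 0.5   # 1/s (illustrative)
dt = 0.001       # one tiny advance of the machine's shafts, in seconds

t, v, y = 0.0, 0.0, 0.0   # time, speed, distance fallen
while t < 5.0:
    a = g - k_over_m * v  # the current acceleration is formed from the current speed
    v += a * dt           # first integrator: acceleration accumulates into speed
    y += v * dt           # second integrator: speed accumulates into distance
    t += dt

print(f"after 5 s: speed ≈ {v:.2f} m/s (terminal ≈ {g/k_over_m:.1f}), fallen ≈ {y:.1f} m")
```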

The mathematics were infinitely more complex—but Vannevar Bush’s lawnmower might have recognized in this calculating room a distant descendant. The differential analyzer, wrote one science historian, “still interpreted mathematics in terms of mechanical rotations, still depended on expertly machined wheel-and-disc integrators, and still drew its answers as curves. Differential equations and contours of elevation—Bush’s computers were very much the offspring of the early Profile Tracer.”

This was the computer before the digital revolution: a machine that literally performed equations in the process of solving them. As long as the machine was acting out the equations that shape an atom, it was, in a meaningful sense, a giant atom; as long as it was acting out the equations that fuel a star, it was a miniature star. “It is an analogue machine,” said Bush. “When one has a problem before him, say the problem of how a bridge that has not been built will sway in a gusty wind, he proceeds to make a combination of mechanical or electrical elements which will act in exactly the same manner as the bridge—that is, will obey the same differential equations.” For the physicist or engineer, two systems that obey the same equations have a kind of identity—or at least an analogy. And that, after all, is all our word analog means. A digital watch is nothing like the sun; an analog watch is the memory of a shadow’s circuit around a dial.

The computer clacked and hummed and scribbled away, spinning out its analogies, and when it ran through the night, Bush’s students kept watch by its side in shifts, ears tuned for the sound of a wheel slipping its orbit. On the nights when it all ran smoothly, they struggled to stay awake in the humming room. And so passed five years.