Aug 21, 2015 · 7 minutes

Silicon Valley has memory issues.

That may be because it is part of another idea, California, with a similar condition. Per Joan Didion, 1966: “The future always looks good in the golden land because no one remembers the past.”

These days the problem is compounded by the very technology we use to interface with the world, technology which, by and large, has emerged from the womb of the California idea. First, because the Web is “a land of the perpetual present,” as Internet Archive founder Brewster Kahle notes in a recent essay. Though we may consider the lives we live out on the Web to have an eerie permanence, that permanence is actually unreliable and incomplete.

Second, the nature of pre-Web information creates a rift with the past. These days, information is born on the Web, whereas information born before must be harvested from a separate reality in order to appear there. Today, when an event happens, it is quickly captured in Web amber in a variety of media, whereas pre-Web events exist on the Web piecemeal. What results is a sort of technologically-bound perception, a new take on an older phenomenon by which my perception of World War II is largely in black-and-white, whereas Vietnam is in color. And certain attributes may be left out even now, unfit for the medium. A poster in the Electronic Frontier Foundation offices reads “The Revolution Will Not Be on Twitter.”

Lastly, novelty is currency in Silicon Valley, which has appropriated for itself the word “innovation.” The problem of Silicon Valley memory loss might also be a result of willful ignorance, rigged up in service of the pursuit of the unprecedented.

And so we forget, and a picture emerges of the current architecture of digital life as sui generis, spontaneously generated sometime in the early ‘90s, wrested from unbeing by a cohort of intrepid entrepreneurs like Larry Page, Steve Jobs, Marc Andreessen [Disclosure: A Pando investor]…

Enter “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots,” the latest book from New York Times science reporter John Markoff. Much as his last book, “What the Dormouse Said” (2005), did for the Engelbartian tradition of intelligence augmentation (IA), the new tome sets out to plot the history of the dream, design and development of Artificial Intelligence.

The book, out on August 25 from HarperCollins, arrives just in time to inform freshly renewed debates over the promise and consequences of recent improvements and imminent transcendences of AI. In this context, “Machines of Loving Grace” is a pretty rare chunk of writing, as it explores the philosophical and ethical questions raised by AI but stops well short of hyperbole.

The book avoids diving too deeply into the technology itself. This keeps the prose from getting bogged down, but also at times glosses over too much, reading a bit like a textbook, albeit one with the smooth pacing and meter worthy of the New York Times.

Instead, the focus is on the lineage of designers and programmers who have devoted themselves to the quest for artificial intelligence, stretching back into the post-World War II era and continuing to the present day. A hefty portion of the book is devoted to looking back on these researchers, who were looking decades ahead. Markoff relies on a wide array of sources and his own interviews to flesh out these scientists’ motivations and ventures and influence upon one another.

As an encyclopedia of the various camps from the past 60-odd years of computer research, both this book and its predecessor should be required reading for laypeople interested in our collective tech backstory, especially so for that peculiar brand of layperson called tech journalists.

Markoff’s general thesis is that the parallel development of IA and AI has long represented a computer science community divided. He points to the early ‘60s as the time of the schism, manifest in the separate projects of John McCarthy at the Stanford Artificial Intelligence Laboratory and of Douglas Engelbart in the Augmentation Research Center at SRI, following the faultline through to the present day as typified by Andy Rubin’s assemblage of a “robot empire” at Google and Tom Gruber designing the software behind Apple’s Siri.

In an email, Markoff tells me, “There is a bit of a sequel aspect to MLG. It grew out of the observation that I made in Dormouse that McCarthy's lab (SAIL) and Engelbart's Lab (Augment) were founded at roughly the same time on either side of Stanford campus and they took opposite approaches -- McCarthy set out to replace the human with artificial intelligence and Engelbart worked to extend human intelligence with IA (Intelligence Augmentation). I realized that the two communities that grew from those two original laboratories worked largely in isolation from each other. Personal computing and the Internet grew from Engelbart's work and transformed the world between 1975 and today. AI initially failed, but is now starting to have an equally profound impact on the world.”

“The central topic of this book is the dichotomy and the paradox inherent in the work of the designers who alternatively augment and replace humans in the systems they build,” he writes in the preface. In the course of presenting the life and work of these designers, Markoff raises a number of questions about the relationship of humans to machines, demonstrating the way this relationship is baked into the technologies we’ve come to use on a daily basis, and projecting into the advances of computer vision, machine learning, robotics, artificial neural nets, augmented reality and natural language processing. Will intelligent computers finally begin colonizing the “knowledge economy” workforce, as has long been predicted? Will human operators be kept “in the loop”? To what extent will we cede decision-making authority to our creations? How much have we ceded already?

Markoff is a staunch anti-determinist, and believes the answers to these questions depend entirely on the values of those designing the current wave of AI.

The book traces these underlying concerns back to thinkers at the dawn of the computing age, and tracks them through subsequent generations of computer folk. Markoff documents the waves of enthusiasm, anxiety, prediction and disappointment that have accompanied the idea of Artificial Intelligence vis-a-vis computers since its first conception.

His chronicling of the correspondence between renegade mathematician Norbert Wiener and United Auto Workers leader Walter Reuther in the early ‘50s is especially interesting, as are his accounts of the “conversions” of AI designers such as Gary Bradski, Terry Winograd and Bill Duvall to more human-centered pursuits.

Ultimately, Markoff concludes that AI will continue to make inroads in economic life, and that robots and ever-smarter machines will become commonplace in the near future. “We will soon be living – either comfortably or uncomfortably – with autonomous machines,” he writes. This observation usually causes the media to shit its collective pants, but Markoff stares this nightmare serenely and bloodlessly in its face, bowels in order.

The human costs and consequences of this development, he explains, can range from dire to subtle to beneficent, depending entirely on how the systems are intended, and so designed. Markoff's hopes for the best-case scenario rest on a convergence of the two computer science communities (AI and IA).

This point is subdued in tone, but flies in the face of both the celebratory esteem for intelligent machines found in the writing of Ray Kurzweil or in Kevin Kelly’s “What Technology Wants,” and the countervailing angst of AI doomsayers. As a calm and deliberate presentation of the history of AI, the ethical considerations involved, and the current branches of its research, the book has arrived at just the right time.

This probably shouldn’t be surprising. Markoff’s first book, 1985’s “The High Cost of High Tech” (written with former activist and current Mountain View City Councilmember Lenny Siegel), arrived in time to spell out many of the concerns around privacy, automation and surveillance that would emerge with the mainstreaming of the Internet. He is credited with being, in 1993, the first reporter to write about the World Wide Web, well ahead of the tech journalism curve. In 2005, with “Dormouse,” he presented a detailed history of the augmentation technologies that would soon find expression in the smartphone.

The arrival, then, of “Machines of Loving Grace” makes a strong case that AI, in its many forms, is the next big thing. If it is, the book will serve future generations in a fashion familiar to science fiction fans: a dispassionate telling of “how we got here” – like the report of the AI Continuity on the history that spawned Wintermute and Neuromancer in William Gibson’s “Mona Lisa Overdrive,” or the invented academic literature on the history of planetary exploration at the beginning of Stanislaw Lem’s “Solaris.” That is to say, this book is a device, a tool in the form of external memory that can move the plot forward by catching its readers up.