Gordon Moore Is the Nerdy Moses of Silicon Valley

Lawgiver

Gordon Moore’s work with the miniaturization of silicon chips has made him one of the gods of the digital universe, but there’s a darker side to the man and the culture he epitomizes.

Back in 1999 Ray Kurzweil, currently director of engineering at Google, published The Age of Spiritual Machines: When Computers Exceed Human Intelligence.

Technology, Kurzweil wrote, “is the continuation of evolution by other means, and is itself an evolutionary process.”

Other futurists, like Elon Musk, also believe this evolution of computing hardware will eventually lead to the Technological Singularity: an era in which artificial intelligence will supplant human intelligence, bringing about a new civilization where machines will become masters of the human race.

When attempting to figure out when this hypothetical era will begin, futurists invariably start from the basic premise of Moore’s Law: the observation Gordon Moore first made in 1965, and refined in 1975, that the number of transistors on an integrated circuit doubles roughly every two years even as the cost per transistor tumbles. Put another way: as transistors have become ever cheaper, computing power has advanced exponentially.
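
As a rough back-of-the-envelope sketch (not something taken from the book), the law reduces to a simple doubling formula. The baseline used here, Intel’s 4004 chip of 1971 with roughly 2,300 transistors, and the fixed two-year doubling period are illustrative assumptions:

```python
# Toy model of Moore's Law: transistor count doubling every two years.
# Baseline: Intel's 4004 (1971), roughly 2,300 transistors. The real doubling
# period has drifted over the decades, so treat the output as an
# order-of-magnitude guide rather than a precise figure.

def transistor_estimate(year, base_year=1971, base_count=2300, doubling_years=2):
    """Estimate transistors per chip, assuming a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1985, 2000, 2015):
    print(year, f"{transistor_estimate(year):,.0f}")
```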

But what exactly does Moore’s Law signify if we want to understand the relationship between mankind and technology? And just how significant is the economic component that’s always been so intrinsic to the law itself?

In Moore’s Law, authors Arnold Thackray, David C. Brock, and Rachel Jones address these very pertinent questions only tangentially. Their primary aim is to understand Gordon Moore himself: the intellect, the scientist, the entrepreneur, and the man.

Here’s what we find out: Moore is a diligent workhorse, a typical technocrat, an obsessive timekeeper, terribly shy, deeply conservative, and emotionally distant from his family.

But apart from consistently singing Moore’s praises, the book lacks a distinctive, convincing argument.

The authors do deserve credit, though, particularly for their lucid explanation of the science that enabled Moore and his colleagues to turn the transistor from an exotic item of military hardware into one of the essential ingredients of modern computing.

Moore began his working life as a chemist, taking a job in 1956 at Shockley Semiconductor Laboratory in Mountain View, California, where his boss was William Shockley, who won the Nobel Prize in physics that same year. A year later Moore and seven colleagues broke away to found Fairchild Semiconductor.

Moore’s work at Fairchild would see him mass-producing a novel silicon version of the transistor, which promised unprecedented standards of reliability for electronic devices.

At Fairchild, a team in Moore’s laboratory had already begun to create a landmark invention: the silicon integrated circuit. More commonly known as the microchip, this was an entire electronic circuit built from a host of transistors that were chemically printed onto a single sliver of silicon. It was here, Moore saw, that the future of technology lay.

By 1976 Moore would boldly make the following claim: “We are bringing about the next great revolution in the history of mankind—the transition to the electronic age.”

Intel, the company Moore founded with Robert Noyce in 1968, and which Andrew Grove joined at its inception as the third key figure, would become the world’s most successful semiconductor manufacturer.

When Moore stepped down as Intel’s chairman in 1997, the company ranked among the top 50 of the Fortune 500, having made record profits during the late ’80s and early ’90s. By then he was sitting on a personal fortune in excess of $20 billion.

Moore's legacy in silicon electronics, the authors explain, “is clear and unchallenged.” Moore’s Law, we are told, “has [revolutionized] the realities of being human.”

Put simply, they say, “Gordon Moore is a good man ... who [focused] on measuring, analyzing, and then making investment decisions [that] fitted right into an American ethos of private capital, personal property, and financial markets.”

The authors spend hundreds of pages delineating the vast profits both Intel and Moore have amassed over the last few decades. They pay scant attention, however, to the following question: Has the development of computers since the mid-’60s—with Moore’s Law as their guiding principle—been held back by a culture whose primary goal is always commercial profit?

And, more important, has the imagination of this technology been stifled and monopolized in that process?

Even right-wing venture capitalists answer yes when asked this question. As Peter Thiel, a co-founder of PayPal and an early investor in Facebook, recently declared, summing up what he sees as the disappointment of technological progress: “We wanted flying cars, instead we got 140 characters.”

In his book Zero to One, Thiel describes how drug discovery in the United States over the last 50 years has traveled in the opposite direction from Moore’s Law. The pattern is known as Eroom’s Law—Moore spelled backwards—the observation that the number of new drugs approved per billion dollars spent on R&D in the U.S. has halved roughly every nine years since 1950.
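
To make the mirror image concrete, the same arithmetic can be run in reverse. This is a minimal sketch, not a figure from Zero to One; the nine-year halving period is the commonly cited estimate, and the 1950 baseline value is purely illustrative:

```python
# Toy model of Eroom's Law: new drug approvals per billion (inflation-adjusted)
# R&D dollars, assumed to halve roughly every nine years since 1950.
# The baseline of 30 approvals per $1 billion in 1950 is illustrative only.

def approvals_per_billion(year, base_year=1950, base_value=30.0, halving_years=9):
    """Estimate approvals per $1B of R&D, assuming a fixed halving period."""
    return base_value * 0.5 ** ((year - base_year) / halving_years)

for year in (1950, 1980, 2010):
    print(year, round(approvals_per_billion(year), 2))
```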

Because information technology has accelerated far more quickly during this period than at any other time in human history, Thiel poses the following question for the biotech industry: Will it ever see similar progress again?

Readers looking for answers to this question—as well as a more thorough analysis of where Moore’s Law has taken late-Western-capitalist values during the last 50 years—will need to consult more philosophical thinkers on the subject.

Jaron Lanier, who worked in Silicon Valley during the ’80s, has long critiqued the quasi-religious corporate fervor that has gripped the tech world since the ’70s.

In books like Who Owns the Future? and You Are Not a Gadget, Lanier argues that because computer networking became so cheap, the financial sector has grown out of all proportion to the rest of the economy, leaving other industries for dead in the process.

Moore’s Law—which Lanier says is Silicon Valley’s 10 Commandments all wrapped into one—still poses a question that nobody has really answered: What exactly drives it?

Is it human-driven, a self-fulfilling prophecy, or an intrinsic, inevitable quality of technology itself? Whatever the answer, Lanier believes Moore’s Law today inspires a near-religious devotion in some of the most influential tech circles in Silicon Valley.

But this critique of how technology has interacted with late capitalist principles to create unprecedented inequality has occupied left-leaning academics for decades.

During the ’70s and ’80s, many of these ambitious, if slightly naïve, Marxist thinkers imagined that computers would deliver us into a world of nanotechnology and nuclear energy, where robots would reduce the toil of industrial labor.

Even as far back as the ’30s, people like progressive economist John Maynard Keynes imagined a world where technology would deliver more leisure time for everyone and reduce working hours considerably.

Unfortunately, the opposite happened: As technology has gotten increasingly more sophisticated, working hours have paradoxically become longer.

Our collective fascination in the West with the mythic origins of Silicon Valley may help to explain why we’ve become so blinded to how technological progress has been purposely stifled by conservative forces since the ’70s.

Think of Steve Jobs, for example, in his early 20s, tripping on LSD in his garage, imagining himself as a creative genius on the road to global tech dominance. It’s a strikingly cool, iconic, and cultish image, one that makes tech geeks look as if they operate outside the standard conservative forces of neoliberalism.

The reality, however, is more complex and sinister.

David Graeber, author of The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy, says that conservatives set up think tanks in Chicago during the ’70s to ensure that a utopian vision of technology, one that empowered workers’ rights, could never take hold.

As Graeber and others have pointed out, instead of robotizing the factories, the capitalist ruling class simply moved the production lines overseas.

One doesn’t have to look too far in Moore’s Law to see evidence of this: On one page we see Gordon Moore visiting an Intel plant in Penang in the ’70s, the decade in which Intel began relocating many of its factories overseas, ensuring that the company always paid as little tax as possible.

The giant corporation was a massive trendsetter for other Silicon Valley companies such as eBay, Google, and Facebook, which have followed the same path of relocation to shelter unprecedented profits from tax.

And what about the relationship between the tech world and the environment?

Again, this is an issue that deserves far more vigorous scrutiny than it gets in this book.

The authors explain how Fairchild, and other technology companies in California, disposed of highly corrosive acid and other poisonous waste for many years by simply pouring it down the drain. They then excuse Moore’s culpability by claiming that he “was unaware of these issues.”

This might be plausible if Moore had a record, over the rest of his career, of environmental concern. In Boiling Frogs: Intel vs. the Village, Barbara Rockwell describes how Intel transformed a sleepy, pollution-free suburb of Albuquerque, New Mexico, into a work site where noxious odors soon became a serious health hazard to the community.

Tired of breathing in solvent fumes from Intel’s commercial activity there, Rockwell and 30 others formed The Corrales Residents for Clean Air and Water. The group initially won a victory. By 2000, however, the corporation, with a bigger legal team and far more cash behind it than the local community group, was allowed to continue polluting the surrounding environment without further interference.

It is now 50 years since Gordon Moore first declared that the cheaper silicon becomes, the faster technology can progress. But where is Moore’s Law heading as computers come to dominate nearly every aspect of human existence?

In the closing pages of Moore’s Law, the authors ask how much time is left before this seminal rule of the tech world runs its course. Because Moore’s Law has been such an intrinsic component of technological progress over the last 50 years, they conclude that even insiders in Silicon Valley aren’t quite sure of the answer.

Could Silicon Valley, like Detroit before it, suddenly turn into a rust belt once the bottom falls out of the commercial market, its economic vibrancy simply collapsing? And how will industries and enterprises that are dependent upon, and accustomed to, Moore’s Law continue to thrive?

Tech optimists and AI enthusiasts believe that if we can accelerate evolution in machines quickly enough, computers can eventually surpass human intelligence.

George Zarkadakis, who recently published In Our Own Image: Will Artificial Intelligence Save or Destroy Us?, believes dystopian scenarios such as the Technological Singularity may be a little too histrionic to ever materialize.

The way computers currently evolve rests entirely on Moore’s Law. But Zarkadakis argues that the law has effectively already ended, because nature has limits: transistors cannot be miniaturized indefinitely.

In another scenario Zarkadakis sketches, Moore’s Law would become obsolete but computers could still advance, through new technology that abandons the separation between software and hardware. Such machines would not be coded in the conventional sense, and they would resemble the human brain.

This would be an interesting shift in computer science, one that could lead technology down new avenues toward a neuromorphic computer: a machine that could potentially have sensory experience of the outside world.

For AI enthusiasts like Kurzweil and Musk, technology takes on a religious-like status, as if an invisible force were already executing a predetermined grand plan.

For people like Lanier and Zarkadakis, however, governing machines and robots with laws and common global values is the best way to embrace technology while still turning it to our advantage. If civil society can contain existential threats like nuclear war through law-abiding global cooperation, then surely the development of computers can be given the same treatment: treaties, for example, could be signed between nation-states to limit the development of technology that poses an imminent threat to the overall well-being of the human race.

Perhaps, then, we need to start discarding dystopian literary metaphors whenever we begin thinking about our own future in relation to technological progress and development.

Whatever role Moore’s Law plays in the coming 50 years, though, one thing is almost a certainty: Any society that prizes maximizing profits above all else is doomed to fail.

So maybe what is needed from government, in abundance, is practical, responsible day-to-day decision-making. The apocalyptic doomsday science-fiction stories we are being fed, in which the robots end up enslaving us in perpetuity, are merely a distraction from a more sober reality.

A form of slavery is already taking hold in the tech world. It isn’t between man and machine, though, but between employers and employees, and between so-called too-big-to-fail global corporations and helpless individuals who wield less and less economic power with each passing day, as technology hollows out and erodes what was traditionally known as the middle class.

Attempting to turn back the clock on this drastic culture of inequality, which has snowballed over the last 50 years with the advent of the silicon revolution that Gordon Moore helped steer, may be the greatest challenge we face yet.
