The Myth of Basic Science

Does scientific research drive innovation? Not very often, argues Matt Ridley: Technological evolution has a momentum of its own, and it has little to do with the abstractions of the lab


Isaac Newton (1642-1727) uses a prism to separate white light into the colors of the spectrum, watched by his Cambridge University roommate John Wickins, right, as rendered in an 1874 print. Photo: Ann Ronan Pictures/Print Collector/Getty Images

Innovation is a mysteriously difficult thing to dictate. Technology seems to change by a sort of inexorable, evolutionary progress, which we probably cannot stop, or speed up much either. And it’s not much the product of science. Most technological breakthroughs come from technologists tinkering, not from researchers chasing hypotheses. Heretical as it may sound, “basic science” isn’t nearly as productive of new inventions as we tend to think.

Suppose Thomas Edison had died of an electric shock before thinking up the light bulb. Would history have been radically different? Of course not. No fewer than 23 people deserve the credit for inventing some version of the incandescent bulb before Edison, according to a history of the invention written by Robert Friedel, Paul Israel and Bernard Finn.


The same is true of other inventions. Elisha Gray and Alexander Graham Bell filed for a patent on the telephone on the very same day. By the time Google came along in 1996, there were already scores of search engines. As Kevin Kelly documents in his book “What Technology Wants,” we know of six different inventors of the thermometer, three of the hypodermic needle, four of vaccination, five of the electric telegraph, four of photography, five of the steamboat, six of the electric railroad. The history of inventions, writes the historian Alfred Kroeber, is “one endless chain of parallel instances.”

It is just as true in science as in technology. Boyle’s Law in English-speaking countries is the same thing as Mariotte’s Law in French-speaking countries. Isaac Newton vented paroxysms of fury at Gottfried Leibniz for claiming, correctly, to have invented the calculus independently. Charles Darwin was prodded into publishing his theory at last by Alfred Russel Wallace, who had precisely the same idea after reading precisely the same book, Malthus’s “Essay on Population.”

Increasingly, technology is developing the kind of autonomy that hitherto characterized biological entities. The Stanford economist Brian Arthur argues that technology is self-organizing and can, in effect, reproduce and adapt to its environment. It thus qualifies as a living organism, at least in the sense that a coral reef is a living thing. Sure, it could not exist without animals (that is, people) to build and maintain it, but then that is true of a coral reef, too.


And who knows when this will no longer be true of technology, and it will build and maintain itself? To the science writer Kevin Kelly, the “technium” - his name for the evolving organism that our collective machinery comprises - is already “a very complex organism that often follows its own urges.” It “wants what every living system wants: to perpetuate itself.”

By 2010, the Internet had roughly as many hyperlinks as the brain has synapses. Today, a significant proportion of the whispering in the cybersphere originates in programs - for monitoring, algorithmic financial trading and other purposes - rather than in people. It is already virtually impossible to turn the Internet off.


The implications of this new way of seeing technology - as an autonomous, evolving entity that continues to progress whoever is in charge - are startling. People are pawns in a process. We ride rather than drive the innovation wave. Technology will find its inventors, rather than vice versa. Short of bumping off half the population, there is little that we can do to stop it from happening, and even that might not work.

Indeed, the history of technological prohibitions is revealing. The Ming Chinese prohibited large ships; the Shogun Japanese, firearms; the medieval Italians, silk-spinning; Americans in the 1920s, alcohol. Such prohibitions can last a long time - three centuries in the case of the Chinese and Japanese examples - but eventually they come to an end, so long as there is competition. Meanwhile, elsewhere in the world, these technologies continued to grow.

Today it is impossible to imagine software development coming to a halt. Somewhere in the world, a nation will harbor programmers, however strongly, say, the U.N. tries to enforce a ban on software development. The idea is absurd, which makes my point.

It is easier to prohibit technological development in larger-scale technologies that require big investments and national regulations. So, for example, Europe has fairly successfully maintained a de facto ban on genetic modification of crops for two decades in the name of the “precautionary principle” - the idea that any possibility of harm, however remote, should scuttle new technology - and it looks as if it may do the same for shale gas. But even here, there is no hope of stopping these technologies globally.

Elisha Gray and Alexander Graham Bell, pictured, filed for a patent on the telephone on the very same day. Photo: AISA/Everett Collection


And if there is no stopping technology, perhaps there is no steering it either. In Mr. Kelly’s words, “the technium wants what evolution began.” Technological change is a far more spontaneous phenomenon than we realize. Out with the heroic, revolutionary story of the inventor, in with the inexorable, incremental, inevitable creep of innovation.

Simultaneous discovery and invention mean that both patents and Nobel Prizes are fundamentally unfair things. And indeed, it is rare for a Nobel Prize not to leave in its wake a train of bitterly disappointed individuals with very good cause to be bitterly disappointed.

Patents and copyright laws grant too much credit and reward to individuals and imply that technology evolves by jerks. Recall that the original rationale for granting patents was not to reward inventors with monopoly profits but to encourage them to share their inventions. A certain amount of intellectual property law is plainly necessary to achieve this. But it has gone too far. Most patents are now as much about defending monopoly and deterring rivals as about sharing ideas. And that discourages innovation.

Even the most explicit paper or patent application fails to reveal nearly enough to help another to retrace the steps through the maze of possible experiments. One study of lasers found that blueprints and written reports were quite inadequate to help others copy a laser design: You had to go and talk to the people who had done it. So a patent often does not achieve the openness that it is supposed to but instead hinders progress.

The economist Edwin Mansfield of the University of Pennsylvania studied the development of 48 chemical, pharmaceutical, electronic and machine goods in New England in the 1970s. He found that, on average, it cost 65% as much money and 70% as much time to copy products as to invent them. And this was among specialists with technical expertise. So even with full freedom to copy, firms would still want to break new ground. Commercial companies do basic research because they know it enables them to acquire the tacit knowledge that assists further innovation.

Politicians believe that innovation can be turned on and off like a tap: You start with pure scientific insights, which then get translated into applied science, which in turn becomes useful technology. So what you must do, as a patriotic legislator, is to ensure that there is a ready supply of money to scientists on the top floor of their ivory towers, and lo and behold, technology will come clanking out of the pipe at the bottom of the tower.

This linear model of how science drives innovation and prosperity goes right back to Francis Bacon, the early 17th-century philosopher and statesman who urged England to catch up with the Portuguese in their use of science to drive discovery and commercial gain. Supposedly, Prince Henry the Navigator in the 15th century had invested heavily in mapmaking, nautical skills and navigation, which resulted in the exploration of Africa and great gains from trade. That is what Bacon wanted to copy.

Yet recent scholarship has exposed this tale as a myth, or rather a piece of Prince Henry’s propaganda. Like most innovation, Portugal’s navigational advances came about by trial and error among sailors, not by speculation among astronomers and cartographers. If anything, the scientists were driven by the needs of the explorers rather than the other way around.

Terence Kealey, a biochemist turned economist, tells this story to illustrate how the linear dogma so prevalent in the world of science and politics — that science drives innovation, which drives commerce — is mostly wrong. It misunderstands where innovation comes from. Indeed, it generally gets it backward.

When you examine the history of innovation, you find, again and again, that scientific breakthroughs are the effect, not the cause, of technological change. It is no accident that astronomy blossomed in the wake of the age of exploration. The steam engine owed almost nothing to the science of thermodynamics, but the science of thermodynamics owed almost everything to the steam engine. The discovery of the structure of DNA depended heavily on X-ray crystallography of biological molecules, a technique developed in the wool industry to try to improve textiles.

Technological advances are driven by practical men who tinkered until they had better machines; abstract scientific rumination is the last thing they do. As Adam Smith, looking around the factories of 18th-century Scotland, reported in “The Wealth of Nations”: “A great part of the machines made use of in manufactures…were originally the inventions of common workmen,” and many improvements had been made “by the ingenuity of the makers of the machines.”

It follows that there is less need for government to fund science: Industry will do this itself. Having made innovations, it will then pay for research into the principles behind them. Having invented the steam engine, it will pay for thermodynamics. This conclusion of Mr. Kealey’s is so heretical as to be incomprehensible to most economists, to say nothing of scientists themselves.

For more than a half century, it has been an article of faith that science would not get funded if government did not do it, and economic growth would not happen if science did not get funded by the taxpayer. It was the economist Robert Solow who demonstrated in 1957 that innovation in technology was the source of most economic growth — at least in societies that were not expanding their territory or growing their populations. It was his colleagues Richard Nelson and Kenneth Arrow who explained in 1959 and 1962, respectively, that government funding of science was necessary, because it is cheaper to copy others than to do original research.

“The problem with the papers of Nelson and Arrow,” writes Mr. Kealey, “was that they were theoretical, and one or two troublesome souls, on peering out of their economists’ aeries, noted that in the real world, there did seem to be some privately funded research happening.” He argues that there is still no empirical demonstration of the need for public funding of research and that the historical record suggests the opposite.


After all, in the late 19th and early 20th centuries, the U.S. and Britain made huge contributions to science with negligible public funding, while Germany and France, with hefty public funding, achieved no greater results either in science or in economics. After World War II, the U.S. and Britain began to fund science heavily from the public purse. With the success of war science and of Soviet state funding that led to Sputnik, it seemed obvious that state funding must make a difference.

The true lesson — that Sputnik relied heavily on Robert Goddard’s work, which had been funded by the Guggenheims — could have gone the other way. Yet there was no growth dividend for Britain and America from this science-funding rush. Their economies grew no faster than they had before.

In 2003, the Organization for Economic Cooperation and Development published a paper on the “sources of economic growth in OECD countries” between 1971 and 1998 and found, to its surprise, that whereas privately funded research and development stimulated economic growth, publicly funded research had no economic impact whatsoever. None. This earthshaking result has never been challenged or debunked. It is so inconvenient to the argument that science needs public funding that it is ignored.

In 2007, the economist Leo Sveikauskas of the U.S. Bureau of Labor Statistics concluded that returns from many forms of publicly financed R&D are near zero and that “many elements of university and government research have very low returns, overwhelmingly contribute to economic growth only indirectly, if at all.”

As the economist Walter Park of American University in Washington, D.C., concluded, the explanation for this discrepancy is that public funding of research almost certainly crowds out private funding. That is to say, if the government spends money on the wrong kind of science, it tends to stop researchers from working on the right kind of science.

To most people, the argument for public funding of science rests on a list of the discoveries made with public funds, from the Internet (defense science in the U.S.) to the Higgs boson (particle physics at CERN in Switzerland). But that is highly misleading. Given that government has funded science munificently from its huge tax take, it would be odd if it had not found out something. This tells us nothing about what would have been discovered by alternative funding arrangements.

And we can never know what discoveries were not made because government funding crowded out philanthropic and commercial funding, which might have had different priorities. In such an alternative world, it is highly unlikely that the great questions about life, the universe and the mind would have been neglected in favor of, say, how to clone rich people’s pets.

The perpetual-innovation machine that feeds economic growth and generates prosperity is not the result of deliberate policy at all, except in a negative sense.

Governments cannot dictate either discovery or invention; they can only make sure that they don’t hinder it. Innovation emerges unbidden from the way that human beings freely interact if allowed. Deep scientific insights are the fruits that fall from the tree of technological change.

Mr. Ridley is the author of “The Evolution of Everything: How New Ideas Emerge,” to be published next week by Harper (which, like The Wall Street Journal, is owned by News Corp). He is a member of the British House of Lords.

https://www.nybooks.com/articles/2003/08/14/whats-not-in-your-genes/

What’s Not in Your Genes

H. Allen Orr

August 14, 2003 issue

Reviewed: Nature via Nurture: Genes, Experience, and What Makes Us Human, by Matt Ridley (HarperCollins, 326 pp., $25.95)

If, by magic, I could make a single interminable debate disappear, I’d probably pick “nature versus nurture.” The argument over the relative roles of genes and environment in human nature has been ceaselessly politicized, shows little sign of resolution, and has, in general, grown tiresome. This is perhaps most obvious in the bloodiest battle of the nature–nurture war, the debate over IQ: How much of the variation that we see in intelligence (at least as measured by standardized tests) is due to heredity and not upbringing? From Francis Galton’s Hereditary Genius (1869) through Stephen Jay Gould’s The Mismeasure of Man (1981) to Richard Herrnstein and Charles Murray’s The Bell Curve (1994), the battle has raged one way and the other, with no clear victor emerging.

It’s good to learn, I suppose, that I’m not the only one who finds the argument annoyingly long-lived. The dust jacket of Matt Ridley’s new book, Nature via Nurture, features statements from a number of scientists and science writers admitting that they had thought it impossible to produce an interesting new book on the subject. In such a climate, if you’re going to attempt yet another work on nature–nurture, you’d better have something truly new, something really big, to say. Matt Ridley does.

Ridley, a science journalist whose previous books include The Red Queen (1993) and the best-selling Genome (1999), has produced a volume that ranges over a vast number of topics, from the genetics of mental illness to the mystery of free will. But at its core are Ridley’s ideas on how to break free of the conflict between nature and nurture. His way out is vaguely Wittgensteinian. We have, he suggests, been asking a meaningless question, making a meaningless distinction. For the question of nature versus nurture makes sense only if the two can be clearly separated. Ridley thinks they cannot. His reason is simple. Despite all the talk about the opposition between genes and environment, it is now clear that learning, intelligence, behavior, and culture — all the ingredients of nurture — involve genes.

... Though his books are usually free of the cloying literary devices that often plague pop science - chief among them the too-cute metaphor that obscures more than clarifies - Nature via Nurture is an exception. Thus we get treated to Ridley’s pet name for the evolutionary force that he thinks shapes the contents of our genes: the Genome Organizing Device, or GOD. While several recent science popularizers have been accused of deifying natural selection, Ridley is, to my knowledge, the first to do so literally. Worse, this GOD barely reappears after His early debut and the reasons for His creation remain unclear.

... "

***

https://www.discovery.org/a/1469/

" Blinded by Science




Nature via Nurture: Genes, Experience, & What Makes Us Human, by Matt Ridley (HarperCollins, 336 pp., $25.95)


This is a very strange book, and I am not quite sure what the author is attempting to achieve. At the very least it appears that he wants to shore up genetic determinism as the key factor in understanding human nature and individual behavior.

Genetic determinism is rational materialism’s substitute for the religious notion of predestination; taking the place of God as puppet master are the genes, whose actions and interactions control who we are, what we think, and how we act. This reductionist view received a body blow recently when the mappers of the human genome found that we have only about 30,000 genes. Because of their understanding of human complexity, the scientists were expecting at least 100,000 - and that means there are probably too few genes for strict genetic determinism to be true.

Ridley, a science writer and former U.S. editor of The Economist, tries to ride to the rescue. In doing so, he adds a twist that he hopes will overcome our apparent genetic paucity: Yes, he says, our genes decide who we are, what we do and think, and even with whom we fall in love. But, he posits, our molecular masters are not rigidly preset when we are born. Rather, they change continually in reaction to our biological and emotional experiences.

Hence 30,000 genes are more than enough for a soft genetic determinism to be true - which means that the battle between those who believe we are the product of our biology (nature) and those who believe we are the result of our environment (nurture) can now end in a truce in which both sides win. We are indeed controlled by our genes, but they in turn are influenced by our experiences. Ridley says that the mapping of the genome “has indeed changed everything, not by closing the argument or winning the [nature versus nurture] battle for one side or the other, but by enriching it from both ends till they meet in the middle.” To Ridley, the core of our true selves isn’t soul, mind, or even body in the macro sense; we are, in essence, merely the expression of our genes at any given moment.

If this is true, then my perception of Nature via Nurture as so much nonsense was the only reaction I could have had, given my original genetic programming, as later modified by my every experience and emotion from my conception, through the womb, childhood, high school, college, practicing law, the death of my father, indeed up to and including the reading of this book. If that is so - if I was forced by my gene expression of the moment to perceive this book as I have - what have we really learned that can be of any benefit to humankind? We are all slaves to chemistry and there is no escape.

Even aside from such broader issues, Ridley does not make a persuasive case. Maybe it is my legal training, but I found his evidence very thin. He doesn’t present proofs so much as resort to wild leaps of logic predicated on questionably relevant social science and facile analogies based on a few animal studies. These are simply not strong enough to be the sturdy weight-supporting pillars that his thesis requires to be credible. Let’s look at just one example. He cites studies of monogamous prairie voles to suggest that humans only think they fall in love, when, in reality, what we call love is merely the expression of genes resulting in the release of the chemicals oxytocin and vasopressin. Claiming that he is not going to “start extrapolating anthropomorphically from pair-bonding in voles to love in people,” he proceeds to do just that. Citing the vole studies and Shakespeare’s A Midsummer Night’s Dream – in which a love potion makes Titania fall in love with a man with a donkey’s head – Ridley writes:

Who would now wager against me that I could not do something like this to a modern Titania? Admittedly, a drop on the eyelids would not suffice. I would have to give her a general anesthetic while I cannulated her medial amygdala and injected oxytocin into it. I doubt even then that I could make her love a donkey. But I might stand a fair chance of making her feel attracted to the first man she sees upon waking. Would you bet against me?

But shouldn’t it take far more than measuring the physical effects of oxytocin on prairie voles to prove that something as complex, maddening, unpredictable, and wonderfully and uniquely human as romantic love can, in reality, be reduced to the mere expression of genes leading to chemical secretions? Not, apparently, to Ridley. “Blindly, automatically, and untaught, we bond with whoever is standing nearest when oxytocin receptors in the medial amygdala get tingled.” Gee, if he’d known that, Bill Clinton could have purchased fewer copies of Leaves of Grass.

The most fascinating thing about this book is that Ridley inadvertently makes a splendid argument for intelligent design. At this point, I am sure Ridley’s “I am utterly appalled” genes are expressing wildly. He is, after all, a scientific materialist in good standing. Yet, throughout the book, in order to make his arguments understandable, he resorts explicitly to the imagery of the guiding hand. He even gives it a name: the “Genome Organizing Device,” or “G.O.D.” Ridley claims that the G.O.D. is “a skillful chef, whose job is to build a soufflé,” consisting of the various parts of us and all other life on the planet. Note the language of intentionality in his description of the evolution of the human brain:

To build a brain with instinctive abilities, the Genome Organizing Device lays down separate circuits with suitable internal patterns that allow them to carry out suitable computations, then links them with appropriate inputs from the senses... In the case of the human mind, almost all such instinctive modules are designed to be modified by experience. Some adapt continuously throughout life, some change rapidly with experience then set like cement. A few just develop to their own timetable.

But according to my lay understanding, this violates the theory and philosophy of evolution. The hypothesis of natural selection holds that species origination and change are promoted by genetic mutations. Those mutations that change the organism to make it more likely than its unchanged peers to survive long enough to reproduce are likely to be passed down the generations. Eventually, these genetic alterations spread among the entire species and become universal within its genome. It is through this dynamic evolutionary process of modification, the theory holds, that life fills all available niches in nature. It is also the process, although the details are not known, by which the primates now known as Homo sapiens became conscious.

The philosophy of Darwinism posits that this evolutionary process is aimless, unintentional, purposeless, and without rhyme or reason. This means it has no biological goal: It just is. Hence, G.O.D. would not want to “build a brain,” develop nature via nurture in species, or do any other thing. Yet, throughout the book, Ridley seems able only to describe what he thinks is going on using the language of intention. Could this be because Ridley’s theories would require interactions that are so complex and unlikely that they would seem laughable if described as having come together haphazardly by mere chance?

So what are we to learn from his insights? In terms of how we live our lives, not much beyond what common sense already tells us: Parents matter and should engage with their children; human teenagers enjoy doing what they are good at, and dislike doing what they are bad at; and so on. That much is harmless; but Ridley’s deeper point is subversive of human freedom and individual accountability. He denies the existence of free will: Our actions are not causes but effects, “prespecified by, and run by, genes.” Indeed, he claims unequivocally, “There is no ‘me’ inside my brain, there is only an ever-changing set of brain states, a distillation of history, emotion, instinct, experience, and the influence of other people – not to mention chance.”

Ridley asserts this as if it would be a good thing to learn that the complexity and richness of human experience could accurately be reduced to merely the acts of so many slaves obeying the lash of chemical overseers acting under the direction of our experience-influenced gene owners. “Nature versus nurture is dead,” Ridley concludes triumphantly. “Long live nature via nurture.”

Sorry. Maybe it’s my genes, but I just don’t buy it.

Mr. Smith is a senior fellow at the Discovery Institute. His next book will be about the science, morality and business aspects of human cloning. "