Appendix: Is there a simple algorithm for intelligence?

In this book, we've focused on the nuts and bolts of neural networks: how they work, and how they can be used to solve pattern recognition problems. This is material with many immediate practical applications. But, of course, one reason for interest in neural nets is the hope that one day they will go far beyond such basic pattern recognition problems. Perhaps they, or some other approach based on digital computers, will eventually be used to build thinking machines, machines that match or surpass human intelligence? This notion far exceeds the material discussed in the book - or what anyone in the world knows how to do. But it's fun to speculate.

There has been much debate about whether it's even possible for computers to match human intelligence. I'm not going to engage with that question. Despite ongoing dispute, I believe it's not in serious doubt that an intelligent computer is possible - although it may be extremely complicated, and perhaps far beyond current technology - and current naysayers will one day seem much like the vitalists.

Rather, the question I explore here is whether there is a simple set of principles which can be used to explain intelligence. In particular, and more concretely: is there a simple algorithm for intelligence?

The idea that there is a truly simple algorithm for intelligence is a bold idea. It perhaps sounds too optimistic to be true. Many people have a strong intuitive sense that intelligence has considerable irreducible complexity. They're so impressed by the amazing variety and flexibility of human thought that they conclude that a simple algorithm for intelligence must be impossible. Despite this intuition, I don't think it's wise to rush to judgement. The history of science is filled with instances where a phenomenon initially appeared extremely complex, but was later explained by some simple but powerful set of ideas.

Consider, for example, the early days of astronomy. Humans have known since ancient times that there is a menagerie of objects in the sky: the sun, the moon, the planets, the comets, and the stars. These objects behave in very different ways - stars move in a stately, regular way across the sky, for example, while comets appear as if out of nowhere, streak across the sky, and then disappear. In the 16th century only a foolish optimist could have imagined that all these objects' motions could be explained by a simple set of principles. But in the 17th century Newton formulated his theory of universal gravitation, which not only explained all these motions, but also explained terrestrial phenomena such as the tides and the behaviour of Earth-bound projectiles. The 16th century's foolish optimist seems in retrospect like a pessimist, asking for too little.

Of course, science contains many more such examples. Consider the myriad chemical substances making up our world, so beautifully explained by Mendeleev's periodic table, which is, in turn, explained by a few simple rules which may be obtained from quantum mechanics. Or the puzzle of how there is so much complexity and diversity in the biological world, whose origin turns out to lie in the principle of evolution by natural selection. These and many other examples suggest that it would not be wise to rule out a simple explanation of intelligence merely on the grounds that what our brains - currently the best examples of intelligence - are doing appears to be very complicated.* *Throughout this appendix I assume that for a computer to be considered intelligent its capabilities must match or exceed human thinking ability. And so I'll regard the question "Is there a simple algorithm for intelligence?" as equivalent to "Is there a simple algorithm which can 'think' along essentially the same lines as the human brain?" It's worth noting, however, that there may well be forms of intelligence that don't subsume human thought, but nonetheless go beyond it in interesting ways.

Contrariwise, and despite these optimistic examples, it is also logically possible that intelligence can only be explained by a large number of fundamentally distinct mechanisms. In the case of our brains, those many mechanisms may perhaps have evolved in response to many different selection pressures in our species' evolutionary history. If this point of view is correct, then intelligence involves considerable irreducible complexity, and no simple algorithm for intelligence is possible.

Which of these two points of view is correct?

To get insight into this question, let's ask a closely related question, which is whether there's a simple explanation of how human brains work. In particular, let's look at some ways of quantifying the complexity of the brain. Our first approach is the view of the brain from connectomics. This is all about the raw wiring: how many neurons there are in the brain, how many glial cells, and how many connections there are between the neurons. You've probably heard the numbers before - the brain contains on the order of 100 billion neurons, 100 billion glial cells, and 100 trillion connections between neurons. Those numbers are staggering. They're also intimidating. If we need to understand the details of all those connections (not to mention the neurons and glial cells) in order to understand how the brain works, then we're certainly not going to end up with a simple algorithm for intelligence.

There's a second, more optimistic point of view, the view of the brain from molecular biology. The idea is to ask how much genetic information is needed to describe the brain's architecture. To get a handle on this question, we'll start by considering the genetic differences between humans and chimpanzees. You've probably heard the sound bite that "human beings are 98 percent chimpanzee". This saying is sometimes varied - popular variations also give the number as 95 or 99 percent. The variations occur because the numbers were originally estimated by comparing samples of the human and chimp genomes, not the entire genomes. However, in 2007 the entire chimpanzee genome was sequenced, and we now know that human and chimp DNA differ at roughly 125 million DNA base pairs. That's out of a total of roughly 3 billion DNA base pairs in each genome. So it's not right to say human beings are 98 percent chimpanzee - we're more like 96 percent chimpanzee.

How much information is in that 125 million base pairs? Each base pair can be labelled by one of four possibilities - the "letters" of the genetic code, the bases adenine, cytosine, guanine, and thymine. So each base pair can be described using two bits of information - just enough information to specify one of the four labels. So 125 million base pairs is equivalent to 250 million bits of information. That's the genetic difference between humans and chimps!

Of course, that 250 million bits accounts for all the genetic differences between humans and chimps. We're only interested in the difference associated with the brain. Unfortunately, no-one knows what fraction of the total genetic difference is needed to explain the difference between the brains. But let's assume for the sake of argument that about half that 250 million bits accounts for the brain differences. That's a total of 125 million bits.

125 million bits is an impressively large number. Let's get a sense for how large it is by translating it into more human terms. In particular, how much would be an equivalent amount of English text? It turns out that the information content of English text is about 1 bit per letter. That sounds low - after all, the alphabet has 26 letters - but there is a tremendous amount of redundancy in English text. Of course, you might argue that our genomes are redundant, too, so two bits per base pair is an overestimate. But we'll ignore that, since at worst it means that we're overestimating our brain's genetic complexity. With these assumptions, we see that the genetic difference between our brains and chimp brains is equivalent to about 125 million letters, or about 25 million English words. That's about 30 times as much as the King James Bible.
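
Since several rough numbers are being juggled here, it may help to see the whole chain of estimates in one place. The following Python sketch simply reproduces the back-of-envelope arithmetic above; the 50 percent brain fraction, the five-letters-per-word conversion, and the King James Bible word count of roughly 780,000 are assumptions used only to recover orders of magnitude, not measured values.

    # Back-of-envelope arithmetic for the genetic-difference estimate.
    # The brain fraction, letters-per-word, and Bible word count are
    # rough assumptions; only orders of magnitude matter.
    base_pair_diff = 125_000_000     # differing human-chimp DNA base pairs
    bits_per_base_pair = 2           # log2(4) bits: A, C, G, or T
    total_bits = base_pair_diff * bits_per_base_pair   # 250 million bits

    brain_fraction = 0.5             # assumed: half the difference is brain-related
    brain_bits = total_bits * brain_fraction           # 125 million bits

    bits_per_letter = 1              # Shannon-style estimate for English text
    letters = brain_bits / bits_per_letter             # ~125 million letters
    words = letters / 5              # assuming ~5 letters per English word
    kjv_words = 780_000              # assumed King James Bible word count

    print(f"Brain-related bits: {brain_bits:,.0f}")       # 125,000,000
    print(f"Equivalent words:   {words:,.0f}")            # 25,000,000
    print(f"King James Bibles:  {words / kjv_words:.0f}")  # ~32, i.e. about 30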

That's a lot of information. But it's not incomprehensibly large. It's on a human scale. Maybe no single human could ever understand all that's written in that code, but a group of people could perhaps understand it collectively, through appropriate specialization. And although it's a lot of information, it's minuscule when compared to the information required to describe the 100 billion neurons, 100 billion glial cells, and 100 trillion connections in our brains. Even if we use a simple, coarse description - say, 10 floating point numbers to characterize each connection - that would require about 70 quadrillion bits. That means the genetic description is a factor of about half a billion less complex than the full connectome for the human brain.
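
In the same spirit, here is a sketch of the connectome-side comparison. The 64 bits per floating point number is my assumption (a standard double-precision size); with it, 100 trillion connections at 10 floats each comes to roughly 6 x 10^16 bits, consistent with the "about 70 quadrillion" figure above.

    # Comparing a coarse connectome description with the genetic description.
    # Assumes 64-bit (double-precision) floats; only orders of magnitude matter.
    connections = 100e12             # ~100 trillion neural connections
    floats_per_connection = 10       # coarse per-connection description from the text
    bits_per_float = 64              # assumption: double-precision floats

    connectome_bits = connections * floats_per_connection * bits_per_float
    genetic_bits = 125_000_000       # brain-related bits from the estimate above

    print(f"Connectome bits: {connectome_bits:.1e}")                 # 6.4e+16
    print(f"Ratio:           {connectome_bits / genetic_bits:.1e}")  # 5.1e+08

On these assumptions the ratio comes out near 5 x 10^8, about half a billion; this is the "roughly nine orders of magnitude" reduction referred to a few paragraphs below.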

What we learn from this is that our genome cannot possibly contain a detailed description of all our neural connections. Rather, it must specify just the broad architecture and basic principles underlying the brain. But that architecture and those principles seem to be enough to guarantee that we humans will grow up to be intelligent. Of course, there are caveats - growing children need a healthy, stimulating environment and good nutrition to achieve their intellectual potential. But provided we grow up in a reasonable environment, a healthy human will have remarkable intelligence. In some sense, the information in our genes contains the essence of how we think. And furthermore, the principles contained in that genetic information seem likely to be within our ability to collectively grasp.

All the numbers above are very rough estimates. It's possible that 125 million bits is a tremendous overestimate, that there is some much more compact set of core principles underlying human thought. Maybe most of that 125 million bits is just fine-tuning of relatively minor details. Or maybe we were overly conservative in how we computed the numbers. Obviously, that'd be great if it were true! For our current purposes, the key point is this: the architecture of the brain is complicated, but it's not nearly as complicated as you might think based on the number of connections in the brain. The view of the brain from molecular biology suggests we humans ought to one day be able to understand the basic principles behind the brain's architecture.

In the last few paragraphs I've ignored the fact that that 125 million bits merely quantifies the genetic difference between human and chimp brains. Not all our brain function is due to those 125 million bits. Chimps are remarkable thinkers in their own right. Maybe the key to intelligence lies mostly in the mental abilities (and genetic information) that chimps and humans have in common. If this is correct, then human brains might be just a minor upgrade to chimpanzee brains, at least in terms of the complexity of the underlying principles. Despite the conventional human chauvinism about our unique capabilities, this isn't inconceivable: the chimpanzee and human genetic lines diverged just 5 million years ago, a blink in evolutionary timescales. However, in the absence of a more compelling argument, I'm sympathetic to the conventional human chauvinism: my guess is that the most interesting principles underlying human thought lie in that 125 million bits, not in the part of the genome we share with chimpanzees.

Adopting the view of the brain from molecular biology gave us a reduction of roughly nine orders of magnitude in the complexity of our description. While encouraging, it doesn't tell us whether or not a truly simple algorithm for intelligence is possible. Can we get any further reductions in complexity? And, more to the point, can we settle the question of whether a simple algorithm for intelligence is possible?

Unfortunately, there isn't yet any evidence strong enough to decisively settle this question. Let me describe some of the available evidence, with the caveat that this is a very brief and incomplete overview, meant to convey the flavour of some recent work, not to comprehensively survey what is known.

Among the evidence suggesting that there may be a simple algorithm for intelligence is an experiment reported in April 2000 in the journal Nature. A team of scientists led by Mriganka Sur "rewired" the brains of newborn ferrets. Usually, the signal from a ferret's eyes is transmitted to a part of the brain known as the visual cortex. But for these ferrets the scientists took the signal from the eyes and rerouted it so it instead went to the auditory cortex, i.e., the brain region that's usually used for hearing.

To understand what happened when they did this, we need to know a bit about the visual cortex. The visual cortex contains many orientation columns. These are little slabs of neurons, each of which responds to visual stimuli from some particular direction. You can think of the orientation columns as tiny directional sensors: when someone shines a bright light from some particular direction, a corresponding orientation column is activated. If the light is moved, a different orientation column is activated. One of the most important high-level structures in the visual cortex is the orientation map, which charts how the orientation columns are laid out.

What the scientists found is that when the visual signal from the ferrets' eyes was rerouted to the auditory cortex, the auditory cortex changed. Orientation columns and an orientation map began to emerge in the auditory cortex. It was more disorderly than the orientation map usually found in the visual cortex, but unmistakably similar. Furthermore, the scientists did some simple tests of how the ferrets responded to visual stimuli, training them to respond differently when lights flashed from different directions. These tests suggested that the ferrets could still learn to "see", at least in a rudimentary fashion, using the auditory cortex.

This is an astonishing result. It suggests that there are common principles underlying how different parts of the brain learn to respond to sensory data. That commonality provides at least some support for the idea that there is a set of simple principles underlying intelligence. However, we shouldn't kid ourselves about how good the ferrets' vision was in these experiments. The behavioural tests tested only very gross aspects of vision. And, of course, we can't ask the ferrets if they've "learned to see". So the experiments don't prove that the rewired auditory cortex was giving the ferrets a high-fidelity visual experience. And so they provide only limited evidence in favour of the idea that common principles underlie how different parts of the brain learn.

What evidence is there against the idea of a simple algorithm for intelligence? Some evidence comes from the fields of evolutionary psychology and neuroanatomy. Since the 1960s evolutionary psychologists have discovered a wide range of human universals, complex behaviours common to all humans, across cultures and upbringing. These human universals include the incest taboo between mother and son, the use of music and dance, as well as much complex linguistic structure, such as the use of swear words (i.e., taboo words), pronouns, and even structures as basic as the verb. Complementing these results, a great deal of evidence from neuroanatomy shows that many human behaviours are controlled by particular localized areas of the brain, and those areas seem to be similar in all people. Taken together, these findings suggest that many very specialized behaviours are hardwired into particular parts of our brains.

Some people conclude from these results that separate explanations must be required for these many brain functions, and that as a consequence there is an irreducible complexity to the brain's function, a complexity that makes a simple explanation for the brain's operation (and, perhaps, a simple algorithm for intelligence) impossible. For example, one well-known artificial intelligence researcher with this point of view is Marvin Minsky. In the 1970s and 1980s Minsky developed his "Society of Mind" theory, based on the idea that human intelligence is the result of a large society of individually simple (but very different) computational processes which Minsky calls agents. In his book describing the theory, Minsky sums up what he sees as the power of this point of view:

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.
In a response* *In "Contemplating Minds: A Forum for Artificial Intelligence", edited by William J. Clancey, Stephen W. Smoliar, and Mark Stefik (MIT Press, 1994). to reviews of his book, Minsky elaborated on the motivation for the Society of Mind, giving an argument similar to that stated above, based on neuroanatomy and evolutionary psychology:
We now know that the brain itself is composed of hundreds of different regions and nuclei, each with significantly different architectural elements and arrangements, and that many of them are involved with demonstrably different aspects of our mental activities. This modern mass of knowledge shows that many phenomena traditionally described by commonsense terms like "intelligence" or "understanding" actually involve complex assemblies of machinery.
Minsky is, of course, not the only person to hold a point of view along these lines; I'm merely giving him as an example of a supporter of this line of argument. I find the argument interesting, but don't believe the evidence is compelling. While it's true that the brain is composed of a large number of different regions, with different functions, it does not therefore follow that a simple explanation for the brain's function is impossible. Perhaps those architectural differences arise out of common underlying principles, much as the motion of comets, the planets, the sun and the stars all arise from a single gravitational force. Neither Minsky nor anyone else has argued convincingly against such underlying principles.

My own prejudice is in favour of there being a simple algorithm for intelligence. And the main reason I like the idea, above and beyond the (inconclusive) arguments above, is that it's an optimistic idea. When it comes to research, an unjustified optimism is often more productive than a seemingly better justified pessimism, for an optimist has the courage to set out and try new things. That's the path to discovery, even if what is discovered is perhaps not what was originally hoped. A pessimist may be more "correct" in some narrow sense, but will discover less than the optimist.

This point of view is in stark contrast to the way we usually judge ideas: by attempting to figure out whether they are right or wrong. That's a sensible strategy for dealing with the routine minutiae of day-to-day research. But it can be the wrong way of judging a big, bold idea, the sort of idea that defines an entire research program. Sometimes, we have only weak evidence about whether such an idea is correct or not. We can meekly refuse to follow the idea, instead spending all our time squinting at the available evidence, trying to discern what's true. Or we can accept that no-one yet knows, and instead work hard on developing the big, bold idea, in the understanding that while we have no guarantee of success, it is only thus that our understanding advances.

With all that said, I don't believe we'll ever find a simple algorithm for intelligence in its most optimistic form. To be more concrete, I don't believe we'll ever find a really short Python (or C or Lisp, or whatever) program - let's say, anywhere up to a thousand lines of code - which implements artificial intelligence. Nor do I think we'll ever find a really easily-described neural network that can implement artificial intelligence. But I do believe it's worth acting as though we could find such a program or network. That's the path to insight, and by pursuing that path we may one day understand enough to write a longer program or build a more sophisticated network which does exhibit intelligence. And so it's worth acting as though an extremely simple algorithm for intelligence exists.

In the 1980s, the eminent mathematician and computer scientist Jack Schwartz was invited to a debate between artificial intelligence proponents and artificial intelligence skeptics. The debate became unruly, with the proponents making over-the-top claims about the amazing things just round the corner, and the skeptics doubling down on their pessimism, claiming artificial intelligence was outright impossible. Schwartz was an outsider to the debate, and remained silent as the discussion heated up. During a lull, he was asked to speak up and state his thoughts on the issues under discussion. He said: "Well, some of these developments may lie one hundred Nobel prizes away" (ref, page 22). It seems to me a perfect response. The key to artificial intelligence is simple, powerful ideas, and we can and should search optimistically for those ideas. But we're going to need many such ideas, and we've still got a long way to go!
