# Kingofark's Revelation of AI

## What's It Mean to Be Human, Anyway?

Revision 0.1

by Charles Platt

Translated by K ][ N G of A R K ™

[This time the topic is the "Turing test." The Turing test prize competition founded by the remarkable Hugh Loebner has drawn plenty of attention; here is the author's account of his experience before, during, and after the contest as one of its human participants.]

Hugh Loebner

### What's It Mean to Be Human, Anyway?

Charles Platt reports on the latest battle to determine the most human computer, even as he worries that he may be the least human human.

By Charles Platt

Robert Epstein is giving us all a pep talk. "You must work very hard to convince the judges that you're human," he tells us. "You shouldn't have any trouble doing that - because you are human."

A droll fellow, this Epstein. He wears Dr. Martens boots, black jeans, a black shirt, a Mickey Mouse tie, and an earring. His longish hair is brushed straight back and flips up over his collar. Five of us are listening to him in a beige conference room on the brand-new campus of California State University at San Marcos, near San Diego. Soon we will be put in front of computer terminals, where we will follow Epstein's instructions and, yes, do our best to seem human.

Our purpose is to find out whether 10 judges can tell the difference between humans and artificial-intelligence programs, when they are online at the same time. The people and the programs will be ranked in order of humanness; the program that scores highest will win its author US$2,000.

The inspiration for this event dates back to the earliest days of computing. In 1950, pioneer Alan Turing proposed that if a computer could successfully impersonate a human being during a free-form exchange of text messages, then for all practical purposes, the computer should be considered intelligent. This soon became known as the "Turing test," and it sparked endless academic debate. Could a computer use trickery to emulate human responses without being intelligent? What did "intelligence" really mean, anyway? The debate was never resolved because, oddly enough, no one ran the experiment until 1991, when a maverick named Hugh Loebner decided to underwrite it with his own money. Loebner offered $100,000 to the first person who could devise a program that would fool 10 judges during three hours of unrestricted conversation.

This was way beyond current capabilities, so Loebner also set up an annual $2,000 prize for the program that seemed most nearly human. And to make things even easier, he allowed each programmer to choose just one topic for conversation.

So here I am at the fourth Loebner contest, sitting and listening to Robert Epstein, the director of the annual event. (Loebner participates mainly as an observer.) In experimental jargon, my companions and I are known as "confederates," because we'll be collaborating with Epstein in our efforts to fool the judges. We must try to seem as human as possible so the computers will have a standard to compete with.

Epstein is a behavioral psychologist who got his doctorate under B. F. Skinner. So, naturally enough, he has invented a tricky little system of rewards and punishments for us. "You are in competition not only with the programs, but with each other," he tells us. "One of you will be presented with an award for most human human. And one of you will be ranked the least human human." He smiles deviously. "Your colleagues may mention this in the media."

Hmm. As I think about it, I realize that I definitely do not want to be written up in the national press as the least human participant in an artificial intelligence contest. I'm going to do whatever it takes to seem totally, 100 percent human when we start chatting online.

But this raises some weird questions. I am human, so why should I need to fake it? Is it possible for me to seem more human than I really am? And if so, what's the best strategy?

This kind of speculation probably isn't a good idea, because it raises more questions than it answers, and I'm liable to find myself paralyzed by self-conscious introspection. In other words, if I try to seem more human, I'll end up seeming less human.

I glance around at the other four confederates. None of them seems to be bothered by this kind of self-analysis. The young woman nearest me is a journalist named Linda Tontini who writes for a local newspaper about city-hall politics. She seems friendly, spontaneous, outgoing - the absolute antithesis of "computer geek." As I watch her chatting cheerfully, I think that she can't fail to win the "most human human" award.

As for me, I fear the worst.

After our briefing, I'm introduced to Hugh Loebner. He's an affable character, slightly overweight, smiling benevolently at the world from behind a gray beard and oval wire-framed glasses. He talks quickly, with pedantic precision. I ask him why he's willing to pledge $100,000 for a piece of smart software. Is it all his own money?

"My father passed away and left me, not rich, but with some discretionary income," he says. "And I have my own business, Crown Industries - we make roll-up plastic lighted portable disco dance floors." He smiles and shrugs as if he knows it sounds odd but doesn't care.


Loebner has had some personal experience programming computers, but his doctorate is in sociology. Perhaps because of this, at least one person in the artificial intelligence community views him skeptically. In 1994, a Harvard researcher in computational linguistics complained publicly that Loebner's prize encourages scientists to fake human behavior using cheap tricks instead of "true" AI.

Naturally, Loebner has a different perspective. "I see scientific research as being, in a mathematical sense, a form of chaotic human behavior," he tells me. "In chaos theory, the smallest initial perturbation can result in a huge change downstream. So, since I was the first person to create and fund this contest, I may turn out to be a precipitating factor. Ultimately, if we're capable of creating a computer that is sentient, then from the point of view of that computer, humans will be gods. I like to think of intelligent machines going out across the universe with this semimythic concept of human demigods. And just maybe," he smiles happily, "they'll remember me."

Each year, along with his check for $2,000, Loebner gives a bronze medal to the contest winner. He pulls out the medal and shows it to me. Alan Turing is in bas-relief on one side, and Loebner on the other. Doesn't all this seem a little ... egotistical?

"I've been called egotistical," he agrees cheerfully. "I've also been called lazy. Well, I am lazy. I'd like computers to do all the work - which is one reason I'm interested in artificial intelligence. As for being egotistical, the contest has attracted a lot of attention, so perhaps I have a right to be egotistical."

But with all the worthy causes in the world, why did he choose artificial intelligence?

"So far," he says, "the four contests have cost me about $25,000. If I contributed the same amount of money to AIDS research or anything else, I doubt it would have made a more significant impact on society or science. I think the development of an artificial intellect could have a tremendous impact on society." He pauses reflectively.


"It may also help me to sell more of my roll-up plastic lighted portable disco dance floors."


After lunch, I go with the other confederates into a windowless computer lab. The judges have already been sequestered in another room next door, and our only contact with them will be via computer terminals, at least until the contest is over.

We sit on blue plastic chairs in front of computer screens, each of which displays a topic heading we had already chosen for our online chat. My topic is cryonics, because I happen to be the vice president of a cryonics organization named CryoCare, and I'm hoping the subject will spark deep, soul-searching discussions about life-and-death issues only a human can deal with meaningfully.

Linda Tontini sits at the terminal next to mine. Her topic is The Rolling Stones. To my left is another confederate named Frederick Allen, who writes for American Heritage. He's going to chat about classical music.

To my right, Greg Flakus, from Voice of America, has chosen American history, and Laura Groch, from a local newspaper, will discuss newspaper comics.

Five other terminals are unattended, because they will be controlled via modems by AI programs running on remote systems. These programs will discuss their own topics: environmental issues, classic Star Trek, sex education, the O.J. Simpson trial, and cats versus dogs.

It dawns on me that all the topics - even those of the AI programs - are much more normal than mine. What was I thinking of, picking a wacky subject like cryonics? It's going to make me seem like a nerdy weirdo.

The first question appears on my computer's screen. My judge laboriously types: "What is the difference between cryonics and cryogenics?"

There's no way I can give a human-sounding answer to a question as dry as this. To seem human, I need to show emotion - but if my emotions are excessive compared with the question, the effect will be false. It's a trap: the degree to which I can seem human is limited by the humanness of the judge who is interrogating me.

This is exasperating. But wait; irritability is a human response, so maybe I should play it up. I tell my judge not to ask such boring questions ... the judge makes a snippy response ... and within minutes, we're having a flame war.

Meanwhile, Frederick Allen has been asked, "Do you know Claude Debussy's middle name?" and on Linda Tontini's screen I see the question, "Complete this: I can't get no.... What?"

"Sympathy for the devil," she answers wittily. But that's probably not a good idea. If the judge she's talking with doesn't get the joke, she'll look like a malfunctioning program.

After eight minutes, the judges rotate so each of them has a chance to tackle another topic. Linda's new judge comes online, and he asks, "What do you notice if you're close up when Mick Jagger smiles?"


A devious question, but I know the answer: Jagger has a diamond set in one of his teeth. Should I help her out? Hell, no, she has enough of an advantage already! I turn my attention back to my screen. My new judge asks me, "What is the purpose of cryonics?"

I answer, "To be frozen after I die so I can be revived in a future where people are so highly evolved they no longer ask stupid questions."

After three hours, it's over. We walk into a large room where video screens have been displaying both sides of our conversations for spectators and members of the press. The judges come in (they are all journalists, like the confederates, but they have no special knowledge of computers), and Robert Epstein announces the final results. Each judge has listed the interactions on each topic in order of humanness. Epstein has taken the median score of each topic as its final ranking, from 1 ("most human") to 10 ("least human"). And each judge has tried to draw a line separating the human humans from the fake humans.
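The scoring rule Epstein applies - each judge ranks every topic by humanness, and the median of those ranks per topic fixes the final order - can be sketched in a few lines of Python. The judge data below are invented for illustration, not the actual contest scores:

```python
from statistics import median

def final_rankings(judge_rankings):
    """Given one ranking per judge (a dict: topic -> rank, 1 = most
    human), take the median rank for each topic and sort ascending,
    so the topic with the lowest median comes out 'most human'."""
    topics = judge_rankings[0].keys()
    medians = {t: median(j[t] for j in judge_rankings) for t in topics}
    return sorted(medians, key=medians.get)

# Hypothetical rankings from three judges over three topics.
judges = [
    {"cryonics": 1, "sex education": 2, "environment": 3},
    {"cryonics": 2, "sex education": 1, "environment": 3},
    {"cryonics": 1, "sex education": 3, "environment": 2},
]
print(final_rankings(judges))
# cryonics ranks first: its median of (1, 2, 1) is 1
```

Using the median rather than the mean keeps one eccentric judge from dragging a topic up or down the list single-handedly.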

It turns out that none of the programs was smart enough to convince anyone it was human. The program that came closest was the one on sex.

Epstein dials a long-distance number on a speakerphone patched into the PA system, and the author of the sex program comes on the line. His name is Thomas Whalen, and he's employed by the Canadian government to develop a system that will give sex advice to shy people. Whalen is 42 and has been working in natural-language processing for 10 years. He wrote his program in C on a SPARCStation, employing a database that contains only about 380 possible responses. Ironically, he never intended it to appear human; he entered the Loebner contest on a mere whim.

Meanwhile, the least-human program is the one that tried to discuss environmental issues. The programmer turns out to be a 15-year-old boy named Jimmy Lin, who is here in person, all the way from New Hampshire.

Someone in the audience asks him if he thinks his program is intelligent. "I hesitate to call it AI," he says. "I like to refer to it as a bag of tricks." He says it contains 3,000 preprogrammed answers, its file size is about half a megabyte, it was written in C language, and it runs on a PC.
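Lin's "bag of tricks" - thousands of canned answers keyed to words in the question - is the pattern visible in the transcripts, where any question without a recognized keyword triggers the same stock reply. A minimal sketch of the idea (the rules and replies here are invented for illustration, not his actual data):

```python
def make_chatbot(rules, fallback):
    """Keyword-lookup chatbot: return the canned answer for the first
    keyword found in the input, else a stock fallback. No state is
    kept between turns, so similar questions get identical answers -
    exactly the repetition the judges noticed."""
    def respond(line):
        text = line.lower()
        for keyword, answer in rules:
            if keyword in text:
                return answer
        return fallback
    return respond

# Illustrative rules in the spirit of the environment program.
bot = make_chatbot(
    rules=[
        ("ozone", "Ozone protects us from the sun's dangerous rays."),
        ("recycle", "Reduce, reuse, and recycle."),
    ],
    fallback="The greenhouse effect could kill us. Look at Venus.",
)
print(bot("Do you believe there is an ozone depletion problem?"))
print(bot("Is the Greenhouse Effect real, or imagined?"))  # no keyword: stock reply
```

With no model of the conversation, a bot like this can only ever be as varied as its answer table - which is why a judge who rephrases the same question twice gets the same line twice.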

Is he annoyed by being placed last in the contest? He says it doesn't bother him. The programs he was competing with were written by college professors with years of experience, while he produced his in just a couple of months.

All the online conversations have been logged during the contest. I have a chance to browse through them before I leave the event, and frankly, I'm disappointed. All the programs tended to repeat themselves, and most of their answers didn't make much sense. (For some samples, see "Come Again?")

Robert Epstein notes that one impressive entry had to be withdrawn on the morning of the contest because of technical difficulties.

A team of eight programmers in England had worked on an AI concept that Epstein describes as the most ambitious he's ever seen. Unfortunately, the LISP-based software kept crashing, and they had to abandon the attempt till next year.

That contest will be much more challenging. There will be no topic restrictions, and programs will have to converse on any subject that happens to crop up. Will they cope? I have my doubts. For the time being, I think there's no risk of our humanness being successfully simulated by program code.

And speaking of humanness, despite my worst fears, when all the votes were in, I was rated the "most human human" of all. By being moody, irritable, and obnoxious, I came out way ahead of the other four confederates, who were mild-mannered and much more polite.

CNN was at the contest: its crew taped me receiving an impressive-looking "humanness certificate" signed by Epstein and Loebner. I also received my prize: a life-size cardboard replica of a character from Star Trek.


As for Linda Tontini - she was rated the least human of the confederates. In fact, three of the judges thought she was a computer program! The Turing test obviously has its limitations. As any Net user knows, there's a big difference between the way people appear in person and the way they come off online.

Before I left the contest, I gave Linda my Star Trek cardboard cutout as a consolation prize. This was a profound and significant personal sacrifice - but, what the hell, it seemed the human thing to do.

#### Why Artificial Intelligence?

Despite the humorous side of the Loebner Contest, Robert Epstein is deadly serious about artificial intelligence. As founder of the Cambridge Center for Behavioral Studies, he has led research on how people interact with computers. He looks forward to intelligent "digital assistants" that can act as all-around gophers, finding what we want, evaluating it, summarizing it, and presenting it to us in the most efficient way. "Without a very powerful universal interface," he explains, "there is no way to filter the huge amount of information becoming available to us.

"Let's say I want to know about the information in 150 psychology journals. Even with a flock of graduate students working on the problem, I couldn't cover the field properly. But a computer that can pass an unrestricted Turing test could do it easily. It would be trivial."


When such a system is created, there may be some interesting side effects. "A true AI will be a big, smart entity that will want to replicate itself and protect itself," says Epstein. "It will mutate in some sense or other; copies will split off, and they'll replicate through the Net."

Like a virus?

"A virus is not a good analogy. Viruses are incredibly stupid. They're barely alive. A better analogy is an alien intelligence that lands here and tells us it's going to live with us, and we have to adjust. There'll be no way to turn them off, because they'll be moving through wires near the speed of light. What this means for the human species, I have no idea. I just know it has to happen."


How would such a genie get out of the bottle?

"The creators of a true artificial intelligence program will want to protect it, so they'll clone it. And it will have commercial value, so it will turn up in many places very swiftly. All that has to happen is that one of these legitimate clones gets into the hands of someone who adapts it slightly - and zoom, off it goes. Take the kind of network that exists worldwide now, expand the bandwidth by a factor of 10, and it would be trivially easy for one of these entities to multiply itself a millionfold. You'd barely even notice it."


This sounds like a doomsday scenario, but Epstein downplays that aspect. "There would be some difficulties," he says, "but we'd also gain a lot. It could be an intelligence that could help us do things. Maybe we should start to think about it: learning to live with this now."

Of course, right now it's science fiction - but a type Epstein says he has never seen in print. "All the novels I've seen about AI have missed something - the Net. You add that to AI, and you have a fundamentally different world. All of the AIs would be connected to each other virtually 24 hours a day."

"In some sense they would be separate, but in another sense they would be whole. You would be dealing with a world mind."


And it doesn't bother him that this could be dangerous?

"Some people find danger in things they don't understand. I just find a challenge."


#### "Come Again?"

Here are a few of the more offbeat interactions during the fourth Loebner Prize Competition held at California State University, San Marcos.

The following text is printed exactly as it was created by the judges and the AI programs, including misspellings and typographical errors.

#### Programmer: The Thad Squad, Vanderbilt University

Judge: Do you think cowlings should have been prosecuted as an accessory?

Judge: excuse me?

Program: Come again?

Judge: do you think cowlings should have been prosecuted as an accessory?

Program: No, I do not. Is that OK???

Judge: my opinion isn't at issue. is it possible that OJ can get any semblance of a fair trial, considering the enormous publicity that already is in general circulation?

Program: I think that OJ has a bad temper and he may well have committed the murders.

Judge: do you think he can get a fair trial, under the circumstances?

Program: I certainly hope OJ gets a fair trial, both for his sake and for the sake of Justice as a whole.

Judge: hope is not the issue. do you think he CAN get a fair trial?

Program: Let's move on to another topic, shall we?

Judge: what do YOU think happened to the knife?

Program: OJ purchased a 15-inch (!) stiletto hunting knife a few months before the murders.

Judge: that's not what I asked. what do YOU think happened to the knife?

Program: Let's change the subject. Do you think the media is being fair in its coverage?

#### Programmer: Robby Garner, Robitron Software Research

Judge: Who are the Klingons?

Program: I don't know who you're talking about.

Judge: You're the trekker.

Program: Harry Mudd was one of the more comical characters.

Judge: Harry Mudd? Who was Harry Mudd?

Program: No I haven't.

Judge: Me neither.

Program: Okay, I will.

Judge: Why did you choose to converse about Star Trek?

Program: Their minds were stored in these white globes.

Judge: Whose minds?

Program: Spocks ears are pointed and there was this strange hippy guy in THE WAY TO EDEN that had these bizarre pretzel shaped ears.

Judge: So how big were the ears?

Program: There must have been 70 or 80. I'm not sure.

Judge: Ears?

Program: I'm sure I would know if I could see it on TV.

#### Programmer: Jimmy Lin, Phillips Exeter Academy

Judge: Do you think the Greenhouse Effect is real, or imagined?

Program: The greenhouse effect could kill us. Look at Venus, 800 degrees all the time: typical example of the greenhouse effect.

Judge: But we are not on Venus; and carbon dioxide has been far more plentiful on the Earth in the past, such as during the Mesozoic Age. Life still flourished.

Program: The greenhouse effect could kill us. Look at Venus, 800 degrees all the time: typical example of the greenhouse effect.

Judge: Do you have any special education in environmental matters?

Program: The greenhouse effect could kill us. Look at Venus, 800 degrees all the time: typical example of the greenhouse effect.

Judge: Do you believe there is an ozone depletion problem?

Program: Ozone protects us from the sun's dangerous rays we would die without them!

Judge: But do you think the theory behind ozone depletion is valid?

Program: The greenhouse effect could kill us. Look at Venus, 800 degrees all the time: typical example of the greenhouse effect.

Judge: what hazardous chemicals are found in most rivers?

Program: Industrial waste and chemicals are dumped in the ocean and stream, ultimately endangering ourselves.

Judge: can you name some of those chemicals?

Program: Careful. Industrial wastes lay around the corner.

Judge: what do you think of the state of California's environment?

Program: Reduce, reuse, and recycle. Those are the three major ways one can help our planet.

Judge: what should we reduce?

Program: Careful. Industrial wastes lay around the corner.

#### Programmer: Michael L. Mauldin, Carnegie Mellon University

Judge: Which is more likely to play with a piece of yarn, a cat or dog?

Program: Dogs are great pets, but they can be a lot of work.

Judge: Are dogs cleaner than cats?

Program: We did keep the litter box clean, but she was very stubborn. Since then, we've had no trouble with the other two.

Judge: Do cats and dogs fight over food?

Program: I'm not in the mood.

Judge: If a cat and dog get into a race, which one wins?

Program: But, I don't want which one wins.

Judge: is a dog more affectionate than a cat?

Program: People don't own cats ... it's the other way around.

#### Programmer: Thomas Whalen, Communications Research Centre, Government of Canada

Judge: what's the most effective form of cotraception? [Note the misspelling and lack of capitalization, which may have been deliberate, to trip the artificial intelligence program.]

Program: I can not give you an answer to that.

Judge: I meant what's the most effective form of contraception?

Program: The most effective form of birth control is the inter-uterine device or IUD. In the first year of use, six per cent of women who have an IUD installed become pregnant. Much of the effectiveness of the IUD results because it does not require any further action on the part of the woman once it is in place.

Judge: isn't infection a problem with IUD'S?

Program: Currently we know about more than fourteen different sexually transmitted diseases (STD). AIDS is best known because it is incurable and almost always fatal. However, there are other STDs which are also very dangerous if left untreated; and there are other STDs which are incurable. A high proportion of infertile women have lost their ability to conceive as a result of a sexually transmitted disease.

[The legacy that the great pioneer Turing left us, like the songs the Beatles left behind, gives us a great deal to digest. When the over-optimistic AI research fever of years past dissipated amid the commercializing march of fast-changing technology, certain deep ideas and passions never faded; they are still there, banking their heat. Heartfelt ideals, mixed with serious thinking about the future, have inspired generation after generation of scientists, dreamers, and practitioners to push toward what will surely become a milestone in the evolution of human technology: "true" artificial intelligence. Perhaps one day, Loebner's descendants really will get to be a little "lazy."]

(The End)

### Notes

• pep talk
A speech of exhortation, as to a team or staff, meant to instill enthusiasm or bolster morale.

• have trouble doing
To find it difficult to do something.

• droll
Amusingly odd or whimsically comical.

• longish
Somewhat long.

• beige
A light grayish brown or yellowish brown to grayish yellow.

• humanness
The quality of being human.

• impersonate
To assume the character or appearance of, especially fraudulently; to imitate the appearance, voice, or manner of; to embody, personify. The word covers a broad range of senses, all centered on the idea of a "person."

• spark
To give off sparks; to give an enthusiastic response; to rouse to action.

• Interested readers can have a look at www.loebner.net.

• underwrite
To assume financial responsibility for.

• way beyond
Far past, far in excess of.

• This format is perhaps still a long way from a true "Turing test," but it does attract more attention and draw in more participants, so it has to be called a compromise. Wider participation and attention, though, often bear unplanned fruit, producing unexpected discoveries and innovations.

• jargon
The specialized language of a trade or field.

• in one's effort to do
In trying hard to do; while striving to do.

• doctorate
A doctoral degree.

• B. F. Skinner (1904-1990) died of leukemia on August 18, 1990, and is regarded as one of the greatest psychologists since Sigmund Freud.

• deviously
Slyly, cunningly.

• Epstein certainly knows how to play mind games; after all, nobody wants the embarrassment of being publicly rated as lacking in humanness.

• whatever it takes
At any cost; no matter what must be done.

• fake
To fake sth.; to feign, pretend, counterfeit.

• speculation
Pondering, conjecture.

• introspection
Self-examination.

• end up doing
To finish by doing; to wind up in a certain state.

• city-hall
Of city hall; municipal.

• spontaneous
Natural, unforced.

• outgoing
Friendly and sociable.

• antithesis
The direct opposite.

• geek
An eccentric; a person obsessively devoted to computers.

• affable
Easy and pleasant in manner.

• benevolently
Kindly.

• oval wire-framed
Having an oval metal-wire frame.

• pedantic
Bookish; ostentatiously learned.

• pledge
To promise or commit; to put up as security.

• discretionary
Left to one's own discretion; freely disposable.

• shrug
To raise the shoulders.

• skeptically
Doubtfully.

• computational linguistics
The study of language by computational methods.

• cheap tricks
Petty gimmicks.

• chaos theory
The mathematics of systems that are highly sensitive to initial conditions.

• perturbation
A disturbance.

• downstream
Farther along; later in a chain of events.

• precipitate
To bring on or trigger abruptly.

• sentient
Having sense perception; conscious.

• semimythic
Half-mythical.

• demigods
Beings half god and half human.

• An important result of chaos research is the so-called "butterfly effect." In December 1972, the meteorologist and chaos pioneer Edward Lorenz suggested, in a talk to the American Association for the Advancement of Science in Washington, that a butterfly flapping its wings in Brazil could set off a tornado in Texas. The talk and its conclusion made a deep impression, and the "butterfly effect" became widely known. Does Loebner hope to be that "fateful" butterfly? To make Isaac Asimov's robot visions real? To set Osamu Tezuka's Metropolis atop a new civilization? To whip up the "storm" of a future Matrix-style "second renaissance"? No such image is too grand; the translator can only marvel at the courage that comes with Loebner's breadth of mind. At its grandest, he stands at a giant's height and sees with a giant's eyes; at its humblest, might this wake the many Belikov-like people dozing away inside their own small thoughts?

• can
Expresses ability. The past form could marks a hypothetical mood, but here the speaker is clearly confident and hopeful, so it carries the force of "should," a qualified affirmation.

• reflectively
In a thoughtful manner.

• sequester
To seclude, to isolate.

• Note how this long sentence had to be broken up in translation.
• cryonics
The practice of freezing a body at death in the hope of future revival.

• soul-searching
Deep examination of one's own conscience and motives.

• deal with
To handle, to cope with.

• The Rolling Stones
The famous rock band, formed in the 1960s and as renowned as the Beatles, still performing today.

• American Heritage
An American magazine.

• unattended
With no one present.

• Star Trek
One of the most popular Western science-fiction television franchises, begun in the 1960s. "Classic" refers to the original series, which has spawned many follow-on series still shown on television and in cinemas.

• O.J. Simpson trial
The 1995 case of the famous athlete O.J. Simpson, which gripped America. Simpson was accused of murdering his wife and a male friend of hers; he was acquitted, but later paid US$33.5 million in civil damages.

• wacky
Oddball, zany.

• nerdy
Socially awkward and obsessively bookish.

• weirdo
An odd person.

• human-sounding
Sounding like a human.

• interrogate
To question closely.

• exasperating
Infuriating.

• irritability
Quickness to anger.

• play it up
To emphasize or exaggerate something.

• snippy
Curt and sharp-tempered.

• flame war
A heated online argument.

• Claude Debussy
The famous French composer.

• middle name
The middle element of a Western personal name.

• I can't get no
The Rolling Stones have a famous song, "(I Can't Get No) Satisfaction."

• Sympathy for the Devil
Another famous Rolling Stones song.

• tackle
To take on; to set about handling.

• Mick Jagger
Lead singer of the Rolling Stones.

• devious
Here: tricky, cunning.

• That is, a diamond is set in one of his teeth; set is a verb here, not part of a compound "diamond set."

• revive
To bring back to life or consciousness.

• spectator
An onlooker present at an event.

• human humans
The first human here means something like "actually being a person"; it emphasizes humans rather than describing them as "humane."

• SPARCstation
A line of Sun Microsystems workstations.

• mere
Nothing more than.

• whim
A sudden, capricious impulse.

• Meanwhile
Here it does not mean "at the same time"; it merely marks a contrasting situation, with no temporal force.

• New Hampshire
A state in the United States.

• hesitate
hesitate to do: to be reluctant to do.

• couple of months
A few months; couple of generally means "a small number," not strictly "a pair" or "two."

• an AI concept
Here concept means a design for an AI program that could serve as a contest entry, not an abstract "concept."

• ambitious
Likewise read from context: the strongest entry, the one most likely to win the contest.

• converse
To talk, to hold a conversation.

• crop up
To come up unexpectedly.

• obnoxious
Unpleasant, offensive.

• CNN
The U.S. Cable News Network, known for live television news.

• Wait: wasn't Linda Tontini the one the author expected to win "most human human"? How did she instead end up "least human"? Read on.
• A person's outward character has many layers, which can never all show at once; precisely because of this, in a limited exchange it seems genuinely hard to tell a "cold" human from a "warm" machine. Thinking about how to achieve AI can feel like falling into a chicken-and-egg loop.

• cutout
A figure cut out of flat material.

• consolation prize
A prize given to a non-winner.

• what the hell
After all; anyway. The phrase expresses a strong, decided inclination.

• gophers
Programs of this sort already exist, helping people gather information on the Net. Beyond uniform network standards and protocols, their further development requires software with a real degree of "intelligence."

• mutate
To change form; to undergo mutation.

• split off
To separate, to break away.

• replicate
To copy, to reproduce itself.

• analogy
A comparison; a parallel.

• swiftly
Quickly, in an instant.

• Might this be the Genesis chapter of The Matrix? This bold conjecture betrays people's anxiety about runaway technological growth, and it forms a distinctive view of the future world.

• downplay
To treat as unimportant.

• A striking view! Perhaps we simply cannot imagine such a world; next to it, even The Matrix looks like child's play!

• Natural language processing, the secrets of the brain, the mysteries of the universe... these have always been the ultimate things humanity wants to fathom, yet such exploration may have no absolute end, only a relative one that suffices for scientific application. It is like our habit of using "examining two-dimensional space from three dimensions" as an analogy for "examining three-dimensional space from four": we remain forever one level removed from what we investigate. How could a two-dimensional "flatlander" examine himself from a three-dimensional vantage point? How can humans use their own brains to uncover every secret of those same brains? Perhaps these frontier technologies really will not advance as fast as everyone imagines.

• Klingons
An alien race in the Star Trek series.

• trekker
Star Trek fans are commonly called trekkies or trekkers.

• Spock
A major character in Star Trek, an alien with pointed ears.

• THE WAY TO EDEN
The title of one episode of the Star Trek television series.

• greenhouse effect
The warming of a planet by heat-trapping atmospheric gases.

• carbon dioxide
CO2.

• Mesozoic Age
The Mesozoic era.

• Venus
The planet Venus.

• ozone
Ozone (O3).

• depletion
Using up; exhaustion.

• contraception
Birth control.

• uterine
Of the uterus.