K ][ N G of A R K ™'s The Revelation Of AI

What's It Mean to Be Human, Anyway?


Revision 0.1

by Charles Platt

Translated by K ][ N G of A R K ™


Hugh Loebner



Charles Platt reports on the latest battle to determine the most human computer, even as he worries that he may be the least human human.


By Charles Platt

Robert Epstein is giving us all a pep talk. "You must work very hard to convince the judges that you're human," he tells us. "You shouldn't have any trouble doing that - because you are human."


A droll fellow, this Epstein. He wears Dr. Martens boots, black jeans, a black shirt, a Mickey Mouse tie, and an earring. His longish hair is brushed straight back and flips up over his collar. Five of us are listening to him in a beige conference room on the brand-new campus of California State University at San Marcos, near San Diego. Soon we will be put in front of computer terminals, where we will follow Epstein's instructions and, yes, do our best to seem human.


Our purpose is to find out whether 10 judges can tell the difference between humans and artificial-intelligence programs, when they are online at the same time. The people and the programs will be ranked in order of humanness; the program that scores highest will win its author US$2,000.


The inspiration for this event dates back to the earliest days of computing. In 1950, pioneer Alan Turing proposed that if a computer could successfully impersonate a human being during a free-form exchange of text messages, then for all practical purposes, the computer should be considered intelligent.


This soon became known as the "Turing test," and it sparked endless academic debate. Could a computer use trickery to emulate human responses without being intelligent? What did "intelligence" really mean, anyway?


The debate was never resolved because, oddly enough, no one ran the experiment until 1991, when a maverick named Hugh Loebner decided to underwrite it with his own money. Loebner offered $100,000 to the first person who could devise a program that would fool 10 judges during three hours of unrestricted conversation.


This was way beyond current capabilities, so Loebner also set up an annual $2,000 prize for the program that seemed most nearly human. And to make things even easier, he allowed each programmer to choose just one topic for conversation.


So here I am at the fourth Loebner contest, sitting and listening to Robert Epstein, the director of the annual event. (Loebner participates mainly as an observer.) In experimental jargon, my companions and I are known as "confederates," because we'll be collaborating with Epstein in our efforts to fool the judges. We must try to seem as human as possible so the computers will have a standard to compete with.


Epstein is a behavioral psychologist who got his doctorate under B. F. Skinner. So, naturally enough, he has invented a tricky little system of rewards and punishments for us. "You are in competition not only with the programs, but with each other," he tells us. "One of you will be presented with an award for most human human. And one of you will be ranked the least human human." He smiles deviously. "Your colleagues may mention this in the media."


Hmm. As I think about it, I realize that I definitely do not want to be written up in the national press as the least human participant in an artificial intelligence contest. I'm going to do whatever it takes to seem totally, 100 percent human when we start chatting online.


But this raises some weird questions. I am human, so why should I need to fake it? Is it possible for me to seem more human than I really am? And if so, what's the best strategy?


This kind of speculation probably isn't a good idea, because it raises more questions than it answers, and I'm liable to find myself paralyzed by self-conscious introspection. In other words, if I try to seem more human, I'll end up seeming less human.


I glance around at the other four confederates. None of them seems to be bothered by this kind of self-analysis. The young woman nearest me is a journalist named Linda Tontini who writes for a local newspaper about city-hall politics. She seems friendly, spontaneous, outgoing - the absolute antithesis of "computer geek." As I watch her chatting cheerfully, I think that she can't fail to win the "most human human" award.


As for me, I fear the worst.


After our briefing, I'm introduced to Hugh Loebner. He's an affable character, slightly overweight, smiling benevolently at the world from behind a gray beard and oval wire-framed glasses. He talks quickly, with pedantic precision. I ask him why he's willing to pledge $100,000 for a piece of smart software. Is it all his own money?


"My father passed away and left me, not rich, but with some discretionary income," he says. "And I have my own business, Crown Industries - we make roll-up plastic lighted portable disco dance floors." He smiles and shrugs as if he knows it sounds odd but doesn't care.


Loebner has had some personal experience programming computers, but his doctorate is in sociology. Perhaps because of this, at least one person in the artificial intelligence community views him skeptically. In 1994, a Harvard researcher in computational linguistics complained publicly that Loebner's prize encourages scientists to fake human behavior using cheap tricks instead of "true" AI.


Naturally, Loebner has a different perspective. "I see scientific research as being, in a mathematical sense, a form of chaotic human behavior," he tells me. "In chaos theory, the smallest initial perturbation can result in a huge change downstream. So, since I was the first person to create and fund this contest, I may turn out to be a precipitating factor. Ultimately, if we're capable of creating a computer that is sentient, then from the point of view of that computer, humans will be gods. I like to think of intelligent machines going out across the universe with this semimythic concept of human demigods. And just maybe," he smiles happily, "they'll remember me."


Each year, along with his check for $2,000, Loebner gives a bronze medal to the contest winner. He pulls out the medal and shows it to me. Alan Turing is in bas-relief on one side, and Loebner on the other. Doesn't all this seem a little ... egotistical?


"I've been called egotistical," he agrees cheerfully. "I've also been called lazy. Well, I am lazy. I'd like computers to do all the work - which is one reason I'm interested in artificial intelligence. As for being egotistical, the contest has attracted a lot of attention, so perhaps I have a right to be egotistical."


But with all the worthy causes in the world, why did he choose artificial intelligence?


"So far," he says, "the four contests have cost me about $25,000. If I contributed the same amount of money to AIDS research or anything else, I doubt it would have made a more significant impact on society or science. I think the development of an artificial intellect could have a tremendous impact on society." He pauses reflectively.


"It may also help me to sell more of my roll-up plastic lighted portable disco dance floors."


After lunch, I go with the other confederates into a windowless computer lab. The judges have already been sequestered in another room next door, and our only contact with them will be via computer terminals, at least until the contest is over.


We sit on blue plastic chairs in front of computer screens, each of which displays a topic heading we had already chosen for our online chat. My topic is cryonics, because I happen to be the vice president of a cryonics organization named CryoCare, and I'm hoping the subject will spark deep, soul-searching discussions about life-and-death issues only a human can deal with meaningfully.


Linda Tontini sits at the terminal next to mine. Her topic is The Rolling Stones. To my left is another confederate named Frederick Allen, who writes for American Heritage. He's going to chat about classical music.


To my right, Greg Flakus, from Voice of America, has chosen American history, and Laura Groch, from a local newspaper, will discuss newspaper comics.


Five other terminals are unattended, because they will be controlled via modems by AI programs running on remote systems. These programs will discuss their own topics: environmental issues, classic Star Trek, sex education, the O.J. Simpson trial, and cats versus dogs.


It dawns on me that all the topics - even those of the AI programs - are much more normal than mine. What was I thinking of, picking a wacky subject like cryonics? It's going to make me seem like a nerdy weirdo.


The first question appears on my computer's screen. My judge laboriously types: "What is the difference between cryonics and cryogenics?"


There's no way I can give a human-sounding answer to a question as dry as this. To seem human, I need to show emotion - but if my emotions are excessive compared with the question, the effect will be false. It's a trap: the degree to which I can seem human is limited by the humanness of the judge who is interrogating me.


This is exasperating. But wait; irritability is a human response, so maybe I should play it up. I tell my judge not to ask such boring questions ... the judge makes a snippy response ... and within minutes, we're having a flame war.


Meanwhile, Frederick Allen has been asked, "Do you know Claude Debussy's middle name?" and on Linda Tontini's screen I see the question, "Complete this: I can't get no.... What?"


"Sympathy for the devil," she replies humorously. But maybe that's not such a great idea. If her judge doesn't get the joke, she'll seem like a malfunctioning program.


After eight minutes, the judges rotate so each of them has a chance to tackle another topic. Linda's new judge comes online, and he asks, "What do you notice if you're close up when Mick Jagger smiles?"


A devious question, but I know the answer: Jagger has a diamond set in one of his teeth. Should I help her out? Hell, no, she has enough of an advantage already! I turn my attention back to my screen. My new judge asks me, "What is the purpose of cryonics?"


I answer, "To be frozen after I die so I can be revived in a future where people are so highly evolved they no longer ask stupid questions."


After three hours, it's over. We walk into a large room where video screens have been displaying both sides of our conversations for spectators and members of the press. The judges come in (they are all journalists, like the confederates, but they have no special knowledge of computers), and Robert Epstein announces the final results. Each judge has listed the interactions on each topic in order of humanness. Epstein has taken the median score of each topic as its final ranking, from 1 ("most human") to 10 ("least human"). And each judge has tried to draw a line separating the human humans from the fake humans.
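The ranking procedure Epstein describes can be sketched in a few lines of Python. This is a hypothetical illustration of the method (each judge ranks every topic, and a topic's final score is the median of its ranks), not the contest's actual software; the judge and topic names are invented:

```python
from statistics import median

def final_rankings(rankings):
    """Given {judge: {topic: rank}}, with rank 1 = most human,
    return topics sorted from most human to least human by the
    median of their ranks across all judges."""
    topics = next(iter(rankings.values())).keys()
    scores = {
        topic: median(judge_ranks[topic] for judge_ranks in rankings.values())
        for topic in topics
    }
    return sorted(scores, key=scores.get)

# Toy data: three judges, three topics.
judges = {
    "judge1": {"cryonics": 1, "sex": 4, "environment": 9},
    "judge2": {"cryonics": 2, "sex": 3, "environment": 10},
    "judge3": {"cryonics": 1, "sex": 5, "environment": 8},
}
print(final_rankings(judges))  # ['cryonics', 'sex', 'environment']
```

Using the median rather than the mean keeps a single eccentric judge from dragging a topic up or down the final list.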


It turns out that none of the programs was smart enough to convince anyone it was human. The program that came closest was the one on sex.


Epstein dials a long-distance number on a speakerphone patched into the PA system, and the author of the sex program comes on the line. His name is Thomas Whalen, and he's employed by the Canadian government to develop a system that will give sex advice to shy people. Whalen is 42 and has been working in natural-language processing for 10 years. He wrote his program in C on a SPARCStation, employing a database that contains only about 380 possible responses. Ironically, he never intended it to appear human; he entered the Loebner contest on a mere whim.


Meanwhile, the least-human program is the one that tried to discuss environmental issues. The programmer turns out to be a 15-year-old boy named Jimmy Lin, who is here in person, all the way from New Hampshire.


Someone in the audience asks him if he thinks his program is intelligent. "I hesitate to call it AI," he says. "I like to refer to it as a bag of tricks." He says it contains 3,000 preprogrammed answers, its file size is about half a megabyte, it was written in C language, and it runs on a PC.
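A "bag of tricks" of this sort amounts to keyword matching against a table of canned replies, with a stock deflection when nothing matches. The sketch below is a generic illustration of the technique, not any entrant's actual code; the keywords and replies are borrowed from the transcripts reprinted later in this article:

```python
# Canned replies keyed by trigger words found in the judge's input.
RESPONSES = {
    "greenhouse": "The greenhouse effect could kill us. Look at Venus.",
    "ozone": "Ozone protects us from the sun's dangerous rays.",
    "recycle": "Reduce, reuse, and recycle.",
}
# Stock deflection used when no keyword matches.
FALLBACK = "Let's move on to another topic, shall we?"

def reply(user_input):
    """Return the first canned answer whose keyword appears in the
    input, or the fallback deflection when nothing matches."""
    text = user_input.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("Is the greenhouse effect real?"))
print(reply("Can you name some chemicals?"))  # no keyword -> fallback
```

The repetition the judges ran into follows directly from this design: any question containing the same keyword triggers the same answer, word for word.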


Is he annoyed by being placed last in the contest? He says it doesn't bother him. The programs he was competing with were written by college professors with years of experience, while he produced his in just a couple of months.


All the online conversations have been logged during the contest. I have a chance to browse through them before I leave the event, and frankly, I'm disappointed. All the programs tended to repeat themselves, and most of their answers didn't make much sense. (For some samples, see "Come Again?")


Robert Epstein notes that one impressive entry had to be withdrawn on the morning of the contest because of technical difficulties.


A team of eight programmers in England had worked on an AI concept that Epstein describes as the most ambitious he's ever seen. Unfortunately, the LISP-based software kept crashing, and they had to abandon the attempt till next year.


That contest will be much more challenging. There will be no topic restrictions, and programs will have to converse on any subject that happens to crop up. Will they cope? I have my doubts. For the time being, I think there's no risk of our humanness being successfully simulated by program code.


And speaking of humanness, despite my worst fears, when all the votes were in, I was rated the "most human human" of all. By being moody, irritable, and obnoxious, I came out way ahead of the other four confederates, who were mild-mannered and much more polite.


CNN was at the contest: its crew taped me receiving an impressive-looking "humanness certificate" signed by Epstein and Loebner. I also received my prize: a life-size cardboard replica of a character from Star Trek.


As for Linda Tontini - she was rated the least human of the confederates. In fact, three of the judges thought she was a computer program! The Turing test obviously has its limitations. As any Net user knows, there's a big difference between the way people appear in person and the way they come off online.


Before I left the contest, I gave Linda my Star Trek cardboard cutout as a consolation prize. This was a profound and significant personal sacrifice - but, what the hell, it seemed the human thing to do.


Why AI?

Despite the humorous side of the Loebner Contest, Robert Epstein is deadly serious about artificial intelligence. As founder of the Cambridge Center for Behavioral Studies, he has led research on how people interact with computers. He looks forward to intelligent "digital assistants" that can act as all-around gophers, finding what we want, evaluating it, summarizing it, and presenting it to us in the most efficient way. "Without a very powerful universal interface," he explains, "there is no way to filter the huge amount of information becoming available to us.


"Let's say I want to know about the information in 150 psychology journals. Even with a flock of graduate students working on the problem, I couldn't cover the field properly. But a computer that can pass an unrestricted Turing test could do it easily. It would be trivial."


When such a system is created, there may be some interesting side effects. "A true AI will be a big, smart entity that will want to replicate itself and protect itself," says Epstein. "It will mutate in some sense or other; copies will split off, and they'll replicate through the Net."


Like a virus?


"A virus is not a good analogy. Viruses are incredibly stupid. They're barely alive. A better analogy is an alien intelligence that lands here and tells us it's going to live with us, and we have to adjust. There'll be no way to turn them off, because they'll be moving through wires near the speed of light. What this means for the human species, I have no idea. I just know it has to happen."


How would such a genie get out of the bottle?


"The creators of a true artificial intelligence program will want to protect it, so they'll clone it. And it will have commercial value, so it will turn up in many places very swiftly. All that has to happen is that one of these legitimate clones gets into the hands of someone who adapts it slightly - and zoom, off it goes. Take the kind of network that exists worldwide now, expand the bandwidth by a factor of 10, and it would be trivially easy for one of these entities to multiply itself a millionfold. You'd barely even notice it."


This sounds like a doomsday scenario, but Epstein downplays that aspect. "There would be some difficulties," he says, "but we'd also gain a lot. It could be an intelligence that could help us do things. Maybe we should start to think about it: learning to live with this now."


Of course, right now it's science fiction - but a type Epstein says he has never seen in print. "All the novels I've seen about AI have missed something - the Net. You add that to AI, and you have a fundamentally different world. All of the AIs would be connected to each other virtually 24 hours a day."


"In some sense they would be separate, but in another sense they would be whole. You would be dealing with a world mind."


And it doesn't bother him that this could be dangerous?


"Some people find danger in things they don't understand. I just find a challenge."


"Come Again?"

Here are a few of the more offbeat interactions during the fourth Loebner Prize Competition held at California State University, San Marcos.


The following text is printed exactly as it was created by the judges and the AI programs, including misspellings and typographical errors.


Topic: The O.J. Simpson Trial
Programmer: Thad Crews, Vanderbilt University

Judge: Do you think cowlings should have been prosecuted as an accessory?

Program: Glad to hear it.

Judge: excuse me?

Program: Come again?

Judge: do you think cowlings should have been prosecuted as an accessory?

Program: No, I do not. Is that OK???

Judge: my opinion isn't at issue. is it possible that OJ can get any semblance of a fair trial, considering the enormous publicity that already is in general circulation?

Program: I think that OJ has a bad temper and he may well have committed the murders.

Judge: do you think he can get a fair trial, under the circumstances?

Program: I certainly hope OJ gets a fair trial, both for his sake and for the sake of Justice as a whole.

Judge: hope is not the issue. do you think he CAN get a fair trial?

Program: Let's move on to another topic, shall we?

Judge: what do YOU think happened to the knife?

Program: OJ purchased a 15-inch (!) stiletto hunting knife a few months before the murders.

Judge: that's not what I asked. what do YOU think happened to the knife?

Program: Let's change the subject. Do you think the media is being fair in its coverage?

Topic: Classic Star Trek
Programmer: Robby Garner, Robitron Software Research Inc.

Judge: Who are the Klingons?

Program: I don't know who you're talking about.

Judge: You're the trekker.

Program: Harry Mudd was one of the more comical characters.

Judge: Harry Mudd? Who was Harry Mudd?

Program: No I haven't.

Judge: Me neither.

Program: Okay, I will.

Judge: Why did you choose to converse about Star Trek?

Program: Their minds were stored in these white globes.

Judge: Whose minds?

Program: Spocks ears are pointed and there was this strange hippy guy in THE WAY TO EDEN that had these bizarre pretzel shaped ears.

Judge: So how big were the ears?

Program: There must have been 70 or 80. I'm not sure.

Judge: Ears?

Program: I'm sure I would know if I could see it on TV.

Topic: Environmental Issues
Programmer: Jimmy Lin, Phillips Exeter Academy

Judge: Do you think the Greenhouse Effect is real, or imagined?

Program: The greenhouse effect could kill us. Look at Venus, 800 degrees all the time: typical example of the greenhouse effect.

Judge: But we are not on Venus; and carbon dioxide has been far more plentiful on the Earth in the past, such as during the Mesozoic Age. Life still flourished.

Program: The greenhouse effect could kill us. Look at Venus, 800 degrees all the time: typical example of the greenhouse effect.

Judge: Do you have any special education in environmental matters?

Program: The greenhouse effect could kill us. Look at Venus, 800 degrees all the time: typical example of the greenhouse effect.

Judge: Do you believe there is an ozone depletion problem?

Program: Ozone protects us from the sun's dangerous rays we would die without them!

Judge: But do you think the theory behind ozone depletion is valid?

Program: The greenhouse effect could kill us. Look at Venus, 800 degrees all the time: typical example of the greenhouse effect.

Judge: what hazardous chemicals are found in most rivers?

Program: Industrial waste and chemicals are dumped in the ocean and stream, ultimately endangering ourselves.

Judge: can you name some of those chemicals?

Program: Careful. Industrial wastes lay around the corner.

Judge: what do you think of the state of California's environment?

Program: Reduce, reuse, and recycle. Those are the three major ways one can help our planet.

Judge: what should we reduce?

Program: Careful. Industrial wastes lay around the corner.

Topic: Cats versus Dogs
Programmer: Michael L. Mauldin, Carnegie Mellon University

Judge: Which is more likely to play with a piece of yarn, a cat or dog?

Program: Dogs are great pets, but they can be a lot of work.

Judge: Are dogs cleaner than cats?

Program: We did keep the litter box clean, but she was very stubborn. Since then, we've had no trouble with the other two.

Judge: Do cats and dogs fight over food?

Program: I'm not in the mood.

Judge: If a cat and dog get into a race, which one wins?

Program: But, I don't want which one wins.

Judge: is a dog more affectionate than a cat?

Program: People don't own cats ... it's the other way around.

Topic: Sex (the winning entry)
Programmer: Thomas Whalen, Communications Research Center, Government of Canada

Judge: what's the most effective form of cotraception? [Note the misspelling and lack of capitalization, which may have been deliberate, to trip the artificial intelligence program.]

Program: I can not give you an answer to that.

Judge: I meant what's the most effective form of contraception?

Program: The most effective form of birth control is the inter-uterine device or IUD. In the first year of use, six per cent of women who have an IUD installed become pregnant. Much of the effectiveness of the IUD results because it does not require any further action on the part of the woman once it is in place.

Judge: isn't infection a problem with IUD'S?

Program: Currently we know about more than fourteen different sexually transmitted diseases (STD). AIDS is best known because it is incurable and almost always fatal. However, there are other STDs which are also very dangerous if left untreated; and there are other STDs which are incurable. A high proportion of infertile women have lost their ability to conceive as a result of a sexually transmitted disease.



