英語流利說 懂你英語 Level 8 Unit 1 Part 2 : On Controlling AI

Sam Harris: Can we build AI without losing control over it?

TEDSummit · 14:27 · Posted September 2016

L8-U1-P2-1 : On Controlling AI 1

I'm going to talk about a failure of intuition that many of us suffer from.

It's really a failure to detect a certain kind of danger.

I'm going to describe a scenario that I think is both terrifying and likely to occur,

and that's not a good combination, as it turns out.

And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us.

And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves.

And yet if you're anything like me, you'll find that it's fun to think about these things.

And that response is part of the problem. OK? That response should worry you.

And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe,

and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

Famine isn't fun. Death by science fiction, on the other hand, is fun,

and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal

an appropriate emotional response to the dangers that lie ahead.

I am unable to marshal this response, and I'm giving this talk.

What feelings does Harris have about the dangers of AI that lie ahead? He believes humans don't treat them seriously enough.

Artificial Intelligence, or AI, is intelligence exhibited by a machine.

What is Harris's talk about? The dangers that AI poses to humanity.

to show how pop culture makes AI seem cool.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us.

L8-U1-P2-2 : On Controlling AI 2

It's as though we stand before two doors.

Behind door number one, we stop making progress in building intelligent machines.

Our computer hardware and software just stops getting better for some reason.

Now take a moment to consider why this might happen.

I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to.

What could stop us from doing this?

A full-scale nuclear war?

A global pandemic?

An asteroid impact?

Justin Bieber becoming president of the United States?

The point is, something would have to destroy civilization as we know it.

You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation.

Almost by definition, this is the worst thing that's ever happened in human history.

So the only alternative, and this is what lies behind door number two,

is that we continue to improve our intelligent machines year after year after year.

At a certain point, we will build machines that are smarter than we are,

and once we have machines that are smarter than we are, they will begin to improve themselves.

And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us.
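Harris leaves the idea of an "intelligence explosion" informal, but the runaway quality I. J. Good had in mind can be made concrete with a toy growth model. Everything in the sketch below is an illustrative assumption (the growth law, the constant k, and the units are all hypothetical), not anything from the talk:

```python
# Toy "intelligence explosion" model (illustrative assumptions only).
# Suppose a system's rate of self-improvement is proportional to the
# square of its current capability: dC/dt = k * C^2. Unlike steady
# exponential progress, this diverges in finite time (analytically at
# t = 1 / (k * C0)) -- one sense in which "the process could get away
# from us."

k = 0.1    # assumed improvement constant (hypothetical)
C = 1.0    # capability, in arbitrary "human research team = 1" units
dt = 0.01  # simulation time step
t = 0.0

while C < 1e9:           # run until capability is a billion times the start
    C += k * C * C * dt  # more capable systems improve themselves faster
    t += dt

print(f"capability passes 1e9 at t = {t:.2f} (analytic blow-up at t = 10.0)")
```

The point of the toy model is the exponent: with growth proportional to C you get ordinary compounding, but with C² capability passes any finite bound in finite time, which is why a self-improving process differs in kind from year-over-year progress.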

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us.

But that isn't the most likely scenario.

It's not that our machines will become spontaneously malevolent.

The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants.

We don't hate them. We don't go out of our way to harm them.

In fact, sometimes we take pains not to harm them. We step over them on the sidewalk.

But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one,

we annihilate them without a qualm.

The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

Why does Harris say there are only two possibilities moving forward? He thinks humanity will either be wiped out or continue to progress.

To be malicious is to be... cruel.

You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation.

If you're anything like me, you'll find that it's fun to think about these things.

I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves.

I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves.

Now, I suspect this seems far-fetched to many of you.

I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable.

But then you must find something wrong with one of the following assumptions.

And there are only three of them.

Intelligence is a matter of information processing in physical systems.

Actually, this is a little bit more than an assumption.

We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already.

And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains,

because our brains have managed it. Right?

I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior,

we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

It's crucial to realize that the rate of progress doesn't matter,

because any progress is enough to get us into the end zone.

We don't need Moore's law to continue.

We don't need exponential progress.

We just need to keep going.

The second assumption is that we will keep going.

We will continue to improve our intelligent machines.

And given the value of intelligence -- I mean,

intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource.

So we want to do this. We have problems that we desperately need to solve.

We want to cure diseases like Alzheimer's and cancer.

We want to understand economic systems. We want to improve our climate science.

So we will do this, if we can.

The train is already out of the station, and there's no brake to pull.

Finally, we don't stand on a peak of intelligence, or anywhere near it, likely.

And this really is the crucial insight. This is what makes our situation so precarious,

and this is what makes our intuitions about risk so unreliable.

Now, just consider the smartest person who has ever lived.

On almost everyone's shortlist here is John von Neumann.

I mean, the impression that von Neumann made on the people around him,

and this included the greatest mathematicians and physicists of his time, is fairly well-documented.

If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived.

So consider the spectrum of intelligence.

Here we have John von Neumann.

And then we have you and me.

And then we have a chicken.

Sorry, a chicken.

There's no reason for me to make this talk more depressing than it needs to be.

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive,

and if we build machines that are more intelligent than we are,

they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

Intelligence is... information being processed in physical systems.

It will be far more advanced than humans can imagine.

General intelligence is... the ability to think flexibly across many domains.

If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived.

We have problems that we desperately need to solve.

He believes humanity will either be wiped out or continue progressing.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us.

The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

One of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead.

We want to cure diseases like Alzheimer's and cancer.

L8-U1-P2-3 : On Controlling AI 3

And it's important to recognize that this is true by virtue of speed alone.

Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT.

Well, electronic circuits function about a million times faster than biochemical ones,

so this machine should think about a million times faster than the minds that built it.

So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.
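The 20,000-year figure is just unit conversion on the stated million-fold speedup; here is a quick check of that arithmetic (the speedup factor comes from the talk, the rest is bookkeeping):

```python
# Check the talk's arithmetic: one wall-clock week of thought at a
# ~1,000,000x speedup equals how many years of human-level work?

speedup = 1_000_000   # "about a million times faster" (from the talk)
weeks_per_year = 52

human_years = speedup / weeks_per_year
print(f"{human_years:,.0f} human-years per wall-clock week")  # ~19,231, i.e. roughly 20,000
```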

How could we even understand, much less constrain, a mind making this sort of progress?

The other thing that's worrying, frankly, is that, imagine the best case scenario.

So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around.

It's as though we've been handed an oracle that behaves exactly as intended.

Well, this machine would be the perfect labor-saving device.

It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials.

So we're talking about the end of human drudgery.

We're also talking about the end of most intellectual work.

So what would apes like ourselves do in this circumstance?

Well, we'd be free to play Frisbee and give each other massages.

Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order?

It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before.

Absent a willingness to immediately put this new wealth to the service of all humanity,

a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI?

This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power.

This is a winner-take-all scenario.

To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.
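The 500,000-year claim is the same arithmetic applied to the competitive gap: six months of lead, multiplied by the million-fold speed advantage stated earlier, is half a million years of human-equivalent work. A one-line check under that assumption:

```python
# Six months of wall-clock lead at a ~1,000,000x speedup, in human-years:
speedup = 1_000_000
lead_years = 0.5   # "six months ahead of the competition"
print(f"{lead_years * speedup:,.0f} human-years ahead")  # 500,000
```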

So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

How does the speed of human thought compare to AI? It will be impossible for humans to operate as fast as AI.

Why does Harris say "to be six months ahead is to be 500,000 years ahead"? A superintelligent AI would develop so fast that other AI wouldn't be able to catch up.

Rumors of an AI weapon being produced could... cause humanity to go into a state of panic.

It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before.

There's no reason for me to make this talk more depressing than it needs to be.

To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.

We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already.

L8-U1-P2-4 : On Controlling AI 4

Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring.

And the most common reason we're told not to worry is time.

This is all a long way off, don't you know. This is probably 50 or 100 years away.

One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars."

This is the Silicon Valley version of "don't worry your pretty little head about it."

No one seems to notice that referencing the time horizon is a total non sequitur.

If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence.

And we have no idea how long it will take us to create the conditions to do that safely.

Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months.

This is how long we've had the iPhone.

This is how long "The Simpsons" has been on television.

Fifty years is not that much time to meet one of the greatest challenges our species will ever face.

Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization,

which read: "People of Earth, we will arrive on your planet in 50 years. Get ready."

And now we're just counting down the months until the mothership lands?

We would feel a little more urgency than we do.

Another reason we're told not to worry is that

these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains,

and we'll essentially become their limbic systems.

Now take a moment to consider that the safest and only prudent path forward, recommended,

is to implant this technology directly into our brains.

Now, this may in fact be the safest and only prudent path forward,

but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

The deeper problem is that building superintelligent AI on its own seems likely to be easier

than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it.

And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others,

given that to win this race is to win the world, provided you don't destroy it in the next moment,

then it seems likely that whatever is easier to do will get done first.

Why doesn't Harris think it matters how long it will take to create AI? As long as progress continues, AI will eventually develop.

What does Harris say about cooperation among governments trying to create AI? They are in a race to control the world.

If something is reassuring, it... makes people feel less worried or frightened.

If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence.

(1) The computer scientist Stuart Russell has a nice analogy here.

(2) He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready."

(3) And now we're just counting down the months until the mothership lands?

(4) We would feel a little more urgency than we do.

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it.

I think we need something like a Manhattan Project on the topic of artificial intelligence.

Not to build it, because I think we'll inevitably do that,

but to understand how to avoid an arms race and to build it in a way that is aligned with our interests.

When you're talking about superintelligent AI that can make changes to itself,

it seems that we only have one chance to get the initial conditions right,

and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence,

that some appropriate computational system is what the basis of intelligence is,

and we admit that we will improve these systems continuously,

and we admit that the horizon of cognition very likely far exceeds what we currently know,

then we have to admit that we are in the process of building some sort of god.

Now would be a good time to make sure it's a god we can live with.

Thank you very much.

How should human beings approach the problem of AI, according to Harris? They should work with each other to make sure AI is as safe as possible.

If something is inevitable, it... is certain to happen.

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it.

The most common reason we're told not to worry is time.

Fifty years is not that much time to meet one of the greatest challenges our species will ever face.

If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence.

Worrying about AI safety is like worrying about overpopulation on Mars.

I don't have a solution to this problem, apart from recommending that more of us think about it.

Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains.

AI will operate at a rate humans could never match.

I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable.

Electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it.

L8-U1-P2-test : On Controlling AI

If you're anything like me, you'll find that it's fun to think about these things.

We want to cure diseases like Alzheimer's and cancer.

We have problems that we desperately need to solve.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us.

He believes humanity will either be wiped out or continue progressing.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves.

One of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead.

You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation.

At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves.

I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out.

The train is already out of the station, and there's no brake to pull.

Massive inequality with wealth could cause humans to turn against each other.

There's no reason for me to make this talk more depressing than it needs to be.

It's as though we've been handed an oracle that behaves exactly as intended.

I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable.

We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already.

Electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it.

The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before.

The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
