TED演讲:如何借助AI获得力量,而不是被它压倒(6)

One is that we should avoid an arms race and lethal autonomous weapons.

其中一条原则是,我们应该避免军备竞赛和致命性自主武器。

The idea here is that any science can be used for new ways of helping people or new ways of harming people.

这里的想法是,任何一门科学既可以被用新的方式来造福人类,也可以被用新的方式来伤害人类。

For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people,

例如,生物学和化学更可能被用于研发新的药物和疗法,而不是新的杀人手段,

because biologists and chemists pushed hard — and successfully — for bans on biological and chemical weapons.

因为生物学家和化学家曾经努力推动,并且成功促成了对生物武器和化学武器的禁令。

And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.

本着同样的精神,大多数人工智能研究者也希望将致命性自主武器污名化并加以禁止。

Another Asilomar AI principle is that we should mitigate AI-fueled income inequality.

另一条阿西洛玛人工智能原则是,我们应该缓解由人工智能加剧的收入不平等。

I think that if we can grow the economic pie dramatically with AI

我认为,如果我们能够利用人工智能把经济这块蛋糕做得大得多,

and we still can’t figure out how to divide this pie so that everyone is better off, then shame on us.

却仍然想不出该如何分配这块蛋糕,让每个人都过得更好,那我们可真该感到羞愧。

Alright, now raise your hand if your computer has ever crashed.

好,现在,如果你的电脑死机过,请举手。

Wow, that’s a lot of hands. Well, then you’ll appreciate this principle that we should invest much more in AI safety research

哇,好多人举手。那么你们就会认同这条原则:我们应该在人工智能安全性研究上投入更多,

because as we put AI in charge of even more decisions and infrastructure,

因为当我们让人工智能掌管越来越多的决策和基础设施时,

we need to figure out how to transform today’s buggy and hackable computers into robust AI systems that we can really trust,

我们需要弄清楚如何把如今满是程序错误、容易被入侵的计算机,转变为我们能够真正信赖的、可靠的人工智能系统,

because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us.

否则的话,这些了不起的新技术就可能出现故障而伤害我们,或者被黑客入侵后转而与我们为敌。

And this AI safety work has to include work on AI value alignment,

而这项人工智能安全工作必须包括人工智能价值观对齐方面的研究,

because the real threat from AGI isn’t malice, like in silly Hollywood movies, but competence

因为AGI带来的真正威胁并不是恶意,不像愚蠢的好莱坞电影里演的那样,而是能力:

AGI accomplishing goals that just aren’t aligned with ours.

AGI达成的目标恰好与我们的目标不一致。

For example, when we humans drove the West African black rhino extinct,

例如,当我们人类导致西非黑犀牛灭绝时,

we didn’t do it because we were a bunch of evil rhinoceros haters, did we?

并不是因为我们是邪恶且痛恨犀牛的家伙,对吧?

We did it because we were smarter than them and our goals weren’t aligned with theirs.

我们那么做,是因为我们比它们聪明,而我们的目标与它们的目标并不一致。

But AGI is by definition smarter than us,

但是AGI在定义上就比我们聪明,

so to make sure that we don’t put ourselves in the position of those rhinos if we create AGI,

所以,如果我们创造出AGI,为了确保我们不会让自己落入那些犀牛的境地,

we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.

我们就需要弄清楚如何让机器理解我们的目标、采纳我们的目标,并且始终保持这些目标。

许多人工智能研究人员预计,在未来几十年内,人工智能将在所有任务和工作中超越人类,从而使我们的未来只受到物理定律的限制,而不是我们的智力极限。麻省理工学院物理学家和人工智能研究员马克斯·特格马克把真正的机会和威胁从神话中分离出来,描述了我们今天应该采取的具体步骤,以确保人工智能最终成为人类有史以来发生的最好的,而不是最坏的事情。
