TED演讲:我们能在打造人工智能的同时掌握控制权吗?(6)
So what would apes like ourselves do in this circumstance?
那在这种情况下,像我们这样的“猿类”会做些什么呢?
Well, we’d be free to play Frisbee and give each other massages.
我们可以悠闲地玩飞盘,给彼此做按摩。
Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.
再服用一些迷幻药,穿一些奇装异服,整个世界就会像“火人节”(Burning Man)一样。
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order?
那可能听起来挺棒的,不过让我们扪心自问,在现有的经济和政治体制下,这意味着什么?
It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before.
我们很可能会目睹前所未有的贫富差距和失业率。
Absent a willingness to immediately put this new wealth to the service of all humanity,
如果我们不愿意立即把这笔新财富用于服务全人类,
a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.
这时少数万亿富翁能够优雅地登上商业杂志的封面,而世界上的其他人则只能挨饿。
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI?
如果听说硅谷里的公司即将造出超级人工智能,俄国人和中国人会采取怎样的行动呢?
This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power.
那台机器将能够以前所未有的力量发动战争,无论是实体战争还是网络战争。
This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.
这是一个赢家通吃的局面。在这场竞争中领先对手六个月,就意味着至少领先五十万年。
So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
所以仅仅是关于这种科技突破的传闻,就可以让我们的种族丧失理智。
Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring.
在我的观念里,当前最可怕的东西正是人工智能的研究人员安慰我们的那些话。
And the most common reason we’re told not to worry is time.
最常见的理由就是关于时间。
This is all a long way off, don’t you know.
他们会说,现在开始担心还为时尚早。
This is probably 50 or 100 years away.
这很可能是50年或者100年之后才需要担心的事。
One researcher has said, “Worrying about AI safety is like worrying about overpopulation on Mars.”
一个研究人员曾说过,“担心人工智能的安全性就好比担心火星上人口过多一样。”
This is the Silicon Valley version of “don’t worry your pretty little head about it.”
这就是硅谷版本的“不要杞人忧天。”