TED Talk: How to get empowered, not overpowered, by AI (5)
We invented the car, screwed up a bunch of times — invented the traffic light, the seat belt and the airbag,
我们发明了汽车,又一不小心搞砸了很多次——发明了红绿灯、安全带和安全气囊,
but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don’t you think?
但对于更强大的科技,像是核武器和AGI,要去从错误中学习,似乎是个比较糟糕的策略,你们怎么看?
It’s much better to be proactive rather than reactive;
事前的准备比事后的补救要好得多;
plan ahead and get things right the first time because that might be the only time we’ll get.
提早做计划,争取一次成功,因为那可能是我们唯一的机会。
But it is funny because sometimes people tell me, “Max, shhh, don’t talk like that. That’s Luddite scaremongering.”
但有趣的是,有时候有人告诉我:“麦克斯,嘘,别那样说话。那是勒德分子在制造恐慌。”
But it’s not scaremongering. It’s what we at MIT call safety engineering.
但这并不是制造恐慌。在麻省理工学院,我们称之为安全工程。
Think about it: before NASA launched the Apollo 11 mission,
想想看:在美国航天局(NASA)发射阿波罗11号之前,
they systematically thought through everything that could go wrong
他们全面地设想过所有可能出错的状况,
when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them.
毕竟是要把人放在装满易爆燃料的燃料箱顶端,再把他们发射到无人能够施以援手的地方。
And there was a lot that could go wrong. Was that scaremongering? No.
可能出错的情况非常多,那是在制造恐慌吗?不是。
That was precisely the safety engineering that ensured the success of the mission,
那正是在做安全工程的工作,以确保任务顺利进行,
and that is precisely the strategy I think we should take with AGI.
这正是我认为处理AGI时应该采取的策略。
Think through what can go wrong to make sure it goes right.
想清楚什么可能出错,从而确保一切顺利进行。
So in this spirit, we’ve organized conferences,
基于这样的精神,我们组织了几场大会,
bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial.
邀请了世界顶尖的人工智能研究者和其他有想法的专业人士,来探讨如何发展这样的智慧,从而确保人工智能对人类有益。
Our last conference was in Asilomar, California last year and produced this list of 23 principles
我们最近的一次大会去年在加州的阿西洛玛举行,我们得出了23条原则,
which have since been signed by over 1,000 AI researchers and key industry leaders,
自此已经有超过1000位人工智能研究者,以及核心企业的领导人参与签署,
and I want to tell you about three of these principles.
我想要和各位分享其中的三项原则。
Many AI researchers expect AI to outsmart humans at all tasks and jobs within decades, enabling a future where we're restricted only by the laws of physics, not the limits of our intelligence. MIT physicist and AI researcher Max Tegmark separates the real opportunities and threats from the myths, describing the concrete steps we should take today to ensure that AI ends up being the best, rather than worst, thing to ever happen to humanity.
许多人工智能研究人员预计,在未来几十年内,人工智能将在所有任务和工作中超越人类,从而使我们的未来只受到物理定律的限制,而不是我们的智力极限。麻省理工学院物理学家和人工智能研究员马克斯·特格马克把真正的机会和威胁从神话中分离出来,描述了我们今天应该采取的具体步骤,以确保人工智能最终成为人类有史以来发生的最好的,而不是最坏的事情。