TED Talk: How to Build AI That Works for Us (3)
I realized that, as I worked on improving AI task by task, dataset by dataset, I was creating massive gaps, holes and blind spots in what it could understand.
And while doing so, I was encoding all kinds of biases.
Biases that reflect a limited viewpoint, limited to a single dataset; biases that can reflect human biases found in the data, such as prejudice and stereotyping.
I thought back to the evolution of the technology that brought me to where I was that day: how the first color images were calibrated against a white woman's skin, meaning that color photography was biased against black faces.
And that same bias, that same blind spot continued well into the ’90s.
And the same blind spot continues even today in how well facial recognition technology can recognize different people's faces.
I thought about the state of the art in research today, where we tend to limit our thinking to one dataset and one problem.
And that in doing so, we were creating more blind spots and biases that the AI could further amplify.
As a research scientist at Google, Margaret Mitchell helps develop computers that can communicate about what they see and understand. She warns that today we are subconsciously encoding gaps, blind spots and biases into AI, and asks us to consider what the technology we create today will mean for the future. As Mitchell puts it: "What we see right now is a snapshot in the evolution of artificial intelligence. If we want AI to evolve in a way that helps humans, then we need to define the goals and strategies that open up that path now."