Powerful artificial intelligence (AI) needs to be reliably aligned with human values, but does this mean AI will eventually have to police those values?
This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That's yesterday's news; what's next? True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that can achieve human-level performance on the full range of tasks that we ourselves can tackle.
If so, there's little reason to think it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and their size is restricted by the dimensions of the human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the incredibly powerful Webb Space Telescope.
Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines? On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, might have wished that everything he touched turned to gold, but didn't really intend this to apply to his breakfast.
So we need to create powerful AI machines that are 'human-friendly' – that have goals reliably aligned with our own values. One thing that makes this task difficult is that we are far from reliably human-friendly ourselves. We do many terrible things to each other and to many other creatures with whom we share the planet. If superintelligent machines don't do a lot better than us, we'll be in deep trouble. We'll have powerful new intelligence amplifying the dark sides of our own fallible natures.
For safety's sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they'll be smart enough for the job. If there are routes to the moral high ground, they'll be better than us at finding them, and steering us in the right direction.
However, there are two big problems with this utopian vision. One is how we get the machines started on the journey; the other is what it would mean to reach this destination. The 'getting started' problem is that we need to tell the machines what they're looking for with sufficient clarity that we can be confident they will find it – whatever 'it' actually turns out to be. This won't be easy, given that we are tribal creatures and conflicted about the ideals ourselves. We often ignore the suffering of strangers, and even contribute to it, at least indirectly. How, then, do we point machines in the direction of something better?
As for the 'destination' problem, we might, by putting ourselves in the hands of these moral guides and gatekeepers, be sacrificing our own autonomy – an important part of what makes us human. Machines that are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own communities, for example.
Loss of freedom to behave badly isn't always a bad thing, of course: denying ourselves the freedom to put children to work in factories, or to smoke in restaurants, is a sign of progress. But are we ready for ethical silicon police limiting our options? They might be so good at doing it that we won't notice them; but few of us are likely to welcome such a future.
These issues might seem far-fetched, but they are to some extent already here. AI already has some input into how resources are used in our National Health Service (NHS) here in the UK, for example. If it were given a greater role, it might manage those resources much more efficiently than humans can, and act in the interests of taxpayers and those who use the health system. However, we'd be depriving some humans (e.g. senior doctors) of the control they presently enjoy. Since we'd want to ensure that people are treated equally and that policies are fair, the goals of AI would need to be specified correctly.
We have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if it is, it will require a cooperative spirit, and a willingness to set aside self-interest.
Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we'll need to give them moral authority, too. And where exactly would that leave human beings? All the more reason to think about the destination now, and to be careful about what we wish for.
Choose the correct letter, A, B, C, or D.
Write the correct letter in boxes 14-19 on your answer sheet.
14 What point does the writer make about AI in the first paragraph?
15 What is the writer doing in the second paragraph?
16 Why does the writer mention the story of King Midas?
17 What challenge does the writer refer to in the fourth paragraph?
18 What does the writer suggest about the future of AI in the fifth paragraph?
19 Which of the following best summarises the writer's argument in the sixth paragraph?