Test 2 - Passage 2: Living with artificial intelligence

Powerful artificial intelligence (AI) needs to be reliably aligned with human values, but does this mean AI will eventually have to police those values?

This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That's yesterday's news; what's next? True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that can achieve human-level performance on the full range of tasks that we ourselves can tackle.

If so, there's little reason to think it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and their size is restricted by the dimensions of the human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the incredibly powerful Webb Space Telescope.

Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines? On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, might have wished that everything he touched turned to gold, but didn't really intend this to apply to his breakfast.

So we need to create powerful AI machines that are 'human-friendly' – that have goals reliably aligned with our own values. One thing that makes this task difficult is that we are far from reliably human-friendly ourselves. We do many terrible things to each other and to many other creatures with whom we share the planet. If superintelligent machines don't do a lot better than us, we'll be in deep trouble. We'll have powerful new intelligence amplifying the dark sides of our own fallible natures.

For safety's sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they'll be smart enough for the job. If there are routes to the moral high ground, they'll be better than us at finding them, and steering us in the right direction. 

However, there are two big problems with this utopian vision. One is how we get the machines started on the journey, the other is what it would mean to reach this destination. The 'getting started' problem is that we need to tell the machines what they're looking for with sufficient clarity that we can be confident they will find it – whatever 'it' actually turns out to be. This won't be easy, given that we are tribal creatures and conflicted about the ideals ourselves. We often ignore the suffering of strangers, and even contribute to it, at least indirectly. How then, do we point machines in the direction of something better?

As for the 'destination' problem, we might, by putting ourselves in the hands of these moral guides and gatekeepers, be sacrificing our own autonomy – an important part of what makes us human. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own communities, for example. 

Loss of freedom to behave badly isn't always a bad thing, of course: denying ourselves the freedom to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical silicon police limiting our options? They might be so good at doing it that we won't notice them; but few of us are likely to welcome such a future.

These issues might seem far-fetched, but they are to some extent already here. AI already has some input into how resources are used in our National Health Service (NHS) here in the UK, for example. If it was given a greater role, it might do so much more efficiently than humans can manage, and act in the interests of taxpayers and those who use the health system. However, we'd be depriving some humans (e.g. senior doctors) of the control they presently enjoy. Since we'd want to ensure that people are treated equally and that policies are fair, the goals of AI would need to be specified correctly.

We have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if it is, it will require a cooperative spirit, and a willingness to set aside self-interest. 

Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we'll need to give them moral authority, too. And where exactly would that leave human beings? All the more reason to think about the destination now, and to be careful about what we wish for.

Choose the correct letter, A, B, C, or D.

14 What point does the writer make about AI in the first paragraph?

  • A It is difficult to predict how quickly AI will progress.
  • B Much can be learned about the use of AI in chess machines.
  • C The future is unlikely to see limitations on the capabilities of AI.
  • D Experts disagree on which specialised tasks AI will be able to perform.
Answer: C

15 What is the writer doing in the second paragraph?

  • A explaining why machines will be able to outperform humans
  • B describing the characteristics that humans and machines share
  • C giving information about the development of machine intelligence
  • D indicating which aspects of humans are the most advanced
Answer: A

16 Why does the writer mention the story of King Midas?

  • A to compare different visions of progress
  • B to illustrate that poorly defined objectives can go wrong
  • C to emphasise the need for cooperation
  • D to point out the financial advantages of a course of action
Answer: B

17 What challenge does the writer refer to in the fourth paragraph?

  • A encouraging humans to behave in a more principled way
  • B deciding which values we want AI to share with us
  • C creating a better world for all creatures on the planet
  • D ensuring AI is more human-friendly than we are ourselves
Answer: D

18 What does the writer suggest about the future of AI in the fifth paragraph?

  • A The safety of machines will become a key issue.
  • B It is hard to know what impact machines will have on the world.
  • C Machines will be superior to humans in certain respects.
  • D Many humans will oppose machines having a wider role.
Answer: C

19 Which of the following best summarises the writer's argument in the sixth paragraph?

  • A More intelligent machines will result in greater abuses of power.
  • B Machine learning will share very few features with human learning.
  • C There are a limited number of people with the knowledge to program machines.
  • D Human shortcomings will make creating the machines we need more difficult.
Answer: D
