Test 04 - Passage 3: Attitudes towards Artificial Intelligence

A

Artificial intelligence (AI) can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Many decisions in our lives require a good forecast, and AI is almost always better at forecasting than we are. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don't like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

B

Take the case of Watson for Oncology, one of technology giant IBM's supercomputer programs. Their attempt to promote this program to cancer doctors was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world's cases. But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much point in Watson's recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment.

On the other hand, if Watson generated a recommendation that contradicted the experts' opinion, doctors would typically conclude that Watson wasn't competent. And the machine wouldn't be able to explain why its treatment was plausible because its machine-learning algorithms were simply too complex to be fully understood by humans. Consequently, this caused even more suspicion and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

C

This is just one example of people's lack of confidence in AI and their reluctance to accept what AI has to offer. Trust in other people is often based on our understanding of how others think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. Even if it can be technically explained (and that's not always the case), AI's decision-making process is usually too difficult for most people to comprehend. And interacting with something we don't understand can cause anxiety and give us a sense that we're losing control.

Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes wrong. Embarrassing AI failures receive a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren't.

D

Feelings about AI run deep. In a recent experiment, people from a range of backgrounds were given various sci-fi films about AI to watch and then asked questions about automation in everyday life. It was found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants' attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.

This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as "confirmation bias". As AI is represented more and more in media and entertainment, it could lead to a society split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.

E

Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people's opinions about the technology, as was found in the study mentioned above. Evidence also suggests the more you use other technologies such as the internet, the more you trust them.

Another solution may be to reveal more about the algorithms which AI uses and the purposes they serve. Several high-profile social media companies and online marketplaces already release transparency reports about government requests and surveillance disclosures. A similar practice for AI could help people have a better understanding of the way algorithmic decisions are made.

F

Research suggests that allowing people some control over AI decision-making could also improve trust and enable AI to learn from human experience. For example, one study showed that when people were allowed the freedom to slightly modify an algorithm, they felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.

We don't need to understand the intricate inner workings of AI systems, but if people are given a degree of responsibility for how they are implemented, they will be more willing to accept AI into their lives.

Reading Passage 3 has six sections, A-F.

Choose the correct heading for each section from the list of headings below.

Write the correct letter, A-H, in boxes 27-32 on your answer sheet.

List of Headings
  • A. An increasing divergence of attitudes towards AI
  • B. Reasons why we have more faith in human judgement than in AI
  • C. The superiority of AI projections over those made by humans
  • D. The process by which AI can help us make good decisions
  • E. The advantages of involving users in AI processes
  • F. Widespread distrust of an AI innovation
  • G. Encouraging openness about how AI functions
  • H. A surprisingly successful AI application
Answers: 27. C   28. F   29. B   30. A   31. G   32. E
