http://hunan.hteacher.net 2023-08-07 13:45 湖南教師招聘 [您的教師考試網(wǎng)]
There was a time when the major concern with AI safety had been the one evil super intelligence, reflected in the movie “The Terminator”. However, the game “Tacoma” takes a different approach. It assumes that there will be numerous AGIs (artificial general intelligences) in the world and that any AGI, even a safely designed one, in the wrong hands at the wrong time could cause lives to be lost. That’s the future that a growing number of AI safety experts are worried about.
This is not a new idea. In the book “Engineering a Safer World”, MIT professor Nancy G. Leveson addresses common misunderstandings about safety-critical systems engineering: engineering systems whose failure could lead to human loss. Such safety-critical technologies include aviation, nuclear power, automobiles, heavy chemicals, biotechnology, and, of course, AGI.
So what can be done?
Technology isn’t always the solution. A famous example is the invention of sonic radars that were supposed to help ships detect nearby obstacles, but which only increased the rate of accidents. Why? Captains sailed faster, thinking they could get away with it thanks to the new safety technology.
Instead of technologies, Leveson’s book suggests, we should be making organizational changes. Additionally, Leveson suggests, among many complicated guidelines, organizations should be aware that safety guidelines will inevitably become lax over time. As a consequence, measures should be carried out to prevent potential disasters.
What lessons can we draw from concern with AI safety? The answer may lie in recent disaster narratives, which remind us that, especially in times like this, we shouldn’t forget the potential for other disasters. Public conscience really does matter. And if we citizens are all better at thinking about safety, maybe we really can prevent disasters.
12. Why does the author mention “The Terminator” in the first paragraph?
A. To arouse readers’ interest in The Terminator.
B. To introduce the topic of concern with AI safety.
C. To mention the similarity between “The Terminator” and “Tacoma”.
D. To make readers recall the evil super intelligence reflected in the movie.
12. Answer: B
Analysis: Inference question. From the first sentence of paragraph 1, “There was a time when the major concern with AI safety had been the one evil super intelligence…”, and the passage as a whole, we can see that the movie “The Terminator” is mentioned in the first paragraph in order to introduce the topic of concern with AI safety. Hence B.
13. Why did the rate of ship accidents still increase after the invention of sonic radars?
A. Because captains seldom used them.
B. Because the radars failed to work properly.
C. Because captains depended on them too much.
D. Because the ships couldn’t detect nearby obstacles.
13. Answer: C
Analysis: Detail comprehension question. From the last sentence of paragraph 4, “Captains sailed faster, thinking they could get away with it thanks to the new safety technology”, we know that the accident rate still increased after the invention of sonic radars because captains, trusting the new safety technology to keep them out of trouble, relied on it too much and sailed faster. Hence C.
14. What does the underlined word “lax” in paragraph 5 refer to?
A. Safe. B. Important.
C. Unreliable. D. Unnecessary.
14. Answer: C
Analysis: Word-guess question. From the last sentence of paragraph 5, “As a consequence, measures should be carried out to prevent potential disasters,” we can infer that such measures are needed because, over time, safety guidelines will no longer be dependable. Of the options Safe, Important, Unreliable, and Unnecessary, only “Unreliable” matches this meaning. Hence C.
15. Which of the following can be the best title for the text?
A. Disaster Prevention Lessons from AI.
B. Safety problems in modern society.
C. Engineering development in modern days.
D. Future applications of artificial intelligence.
15. Answer: A
Analysis: Main-idea question. From the first sentence of the last paragraph, “What lessons can we draw from concern with AI safety?”, and the passage as a whole, the text is mainly about the disaster-prevention lessons we can draw from concern with AI safety. Hence A.
Editor: Xinxin