不管愿不愿意,不可否认的是,我们离人工智能时代越来越近。人工智能将给社会的各个方面带来深刻的变化,而这难免让人们担忧甚至抵制,就像曾经担忧和抵制转基因食品那样。

 


译者:徐嘉茵&王津雨

校对:马里奥

策划:邹世昌

 

 

Artificial intelligence risks GM-style public backlash, experts warn

专家警告,人工智能有可能像转基因食品那样,引发公众的强烈抵制

 

本文选自 The Guardian | 取经号原创翻译


 

Researchers say social, ethical and political concerns are mounting and greater oversight is urgently needed

研究人员称,人工智能引发的社会、道德以及政治方面的担忧正日益增加,迫切需要更强有力的监管

 

The emerging field of artificial intelligence (AI) risks provoking a public backlash as it increasingly falls into private hands, threatens people’s jobs, and operates without effective oversight or regulatory control, leading experts in the technology warn.

人工智能(AI)领域的权威专家警告,这一新兴技术可能引发公众的强烈抵制,因为它正日益落入私人手中,威胁着人们的工作,而且其运作缺乏有效的监督和监管。

 

At the start of a new Guardian series on AI, experts in the field highlight the huge potential for the technology, which is already speeding up scientific and medical research, making cities run more smoothly, and making businesses more efficient.

在《卫报》新开设的人工智能系列报道之初,该领域的专家强调,人工智能技术潜力巨大:它已经在加快科学和医学研究的进度,让城市运作更顺畅,让企业运转更高效。

 

But for all the promise of an AI revolution, there are mounting social, ethical and political concerns about the technology being developed without sufficient oversight from regulators, legislators and governments. 

尽管人工智能革命许诺了一个光明的未来,但由于监管者、立法者和政府的监督力度不足,该技术引发的社会、道德以及政治方面的忧虑正日益增加。

 

Researchers told the Guardian that:

The benefits of AI might be lost to a GM-style backlash.

A brain drain to the private sector is harming universities.

Expertise and wealth are being concentrated in a handful of firms.

The field has a huge diversity problem.

研究人员在接受《卫报》采访时提到:

人工智能的种种益处,可能会因遭遇转基因式的强烈抵制而付诸东流;

人才流向私营机构,正在损害大学的发展;

专业技术和财富正集中在少数企业手中;

该领域严重缺乏多元性。

 

In October, Dame Wendy Hall, professor of computer science at Southampton University, co-chaired an independent review on the British AI industry. The report found that AI had the potential to add £630bn to the economy by 2035. But to reap the rewards, the technology must benefit society, she said.

今年十月,南安普敦大学计算机科学教授温蒂·霍尔女爵士(Dame Wendy Hall)担任了一项针对英国人工智能产业的独立评估的联合主席。评估报告称,到2035年,人工智能有望为经济创造6300亿英镑的价值。但她指出,要收获这些回报,这项技术必须造福社会。

 

“AI will affect every aspect of our infrastructure and we have to make sure that it benefits us,” she said. “We have to think about all the issues. When machines can learn and do things for themselves, what are the dangers for us as a society? It’s important because the nations that grasp the issues will be the winners in the next industrial revolution.”

“人工智能将影响我们基础设施的方方面面,我们必须确保它会给我们带来益处,”霍尔说道,“我们必须想清楚所有问题。当机器能够自主学习、自主行事时,会给人类社会带来什么危险?这一点很重要,因为能把握住这些问题的国家,将成为下一场产业革命的赢家。”

 

Today, responsibility for developing safe and ethical AI lies almost exclusively with the companies that build them. There are no testing standards, no requirement for AIs to explain their decisions, and no organisation equipped to monitor and investigate any bad decisions or accidents that happen.

目前,研发安全、合乎道德的人工智能的责任,几乎完全由开发它们的企业承担。没有测试标准,没有要求人工智能解释其决策的规定,也没有任何组织有能力对糟糕的决策或发生的事故进行监控和调查。

 

“We need to have strong independent organisations, along with dedicated experts and well-informed researchers, that can act as watchdogs and hold the major firms accountable to high standards,” said Kate Crawford, co-director of the AI Now Institute at New York University. “These systems are becoming the new infrastructure. It is crucial that they are both safe and fair.”

“我们需要强有力的独立机构,配备专职专家和消息灵通的研究人员,发挥监督作用,让大型企业以高标准接受问责,”纽约大学AI Now Institute联合主任凯特·克劳福德(Kate Crawford)说,“这些人工智能系统正在成为新的基础设施,确保它们既安全又公平至关重要。”

 

Many modern AIs learn to make decisions by being trained on massive datasets. But if the data itself contains biases, these can be inherited and repeated by the AI.

许多现代人工智能通过在海量数据集上接受训练,学会如何做出决定。但如果数据本身存在偏见,这些偏见就可能被人工智能继承并不断重复。

 

Earlier this year, an AI that computers use to interpret language was found to display gender and racial biases. Another used for image recognition categorised cooks as women, even when handed images of balding men. A host of others, including tools used in policing and prisoner risk assessment, have been shown to discriminate against black people.

今年早些时候,人们发现一项计算机用来理解语言的人工智能技术表现出性别和种族偏见。另一项用于图像识别的人工智能把下厨的人归类为女性,哪怕提交的照片上是秃顶男性。还有许多人工智能工具,包括用于警务工作和囚犯风险评估的工具,都被证明存在对黑人的歧视。

 

The industry’s serious diversity problem is partly to blame for AIs that discriminate against women and minorities. At Google and Facebook, four in five of all technical hires are men. The white male dominance of the field has led to health apps that only cater for male bodies, photo services that labelled black people as gorillas and voice recognition systems that did not detect women’s voices. “Software should be designed by a diverse workforce, not your average white male, because we’re all going to be users,” said Hall.

之所以出现歧视女性和少数族裔的人工智能,部分原因要归咎于该行业严重的多元化缺失。在谷歌和Facebook,五分之四的技术岗位新员工都是男性。白人男性主导整个领域,导致了只考虑男性身体的健康应用、给黑人贴上大猩猩标签的图片服务,以及识别不出女性声音的语音识别系统。霍尔表示:“软件应该由多元化的团队来设计,而不是清一色的普通白人男性,因为我们所有人都将成为使用者。”

 

Poorly tested or implemented AIs are another concern. Last year, a driver in the US died when the autopilot on his Tesla Model S failed to see a truck crossing the highway. An investigation into the fatal crash by the US National Transportation Safety Board criticised Tesla for releasing an autopilot system that lacked sufficient safeguards. The company’s CEO, Elon Musk, is one of the most vocal advocates of AI safety and regulation.

测试不充分或使用不当的人工智能是另一个令人担忧的问题。去年,美国一名司机车祸身亡,原因是他驾驶的特斯拉Model S上的自动驾驶系统未能发现一辆正在横穿公路的卡车。美国国家运输安全委员会在对这起致命事故进行调查后,批评特斯拉发布了一套缺乏足够安全保障的自动驾驶系统。该公司首席执行官埃隆·马斯克,正是为人工智能安全和监管发声最多的倡导者之一。

 

Yet more concerns exist over the use of AI-powered systems to manipulate people, with serious questions now being asked about uses of social media in the run-up to Britain’s EU referendum and the 2016 US election. “There’s a technology arms race going on to see who can influence voters,” said Toby Walsh professor of artificial intelligence at the University of New South Wales and author of a recent book on AI called Android Dreams.

然而,更令人担忧的是AI驱动的系统被用于操纵人类:社交媒体在英国脱欧公投的准备阶段以及2016年美国大选中的使用方式,如今正受到严肃的质疑。新南威尔士大学人工智能教授托比·沃尔什(Toby Walsh)近期出版了一本关于AI的著作《机器人之梦》(Android Dreams)。他指出:“一场技术军备竞赛正在上演,看谁能影响选民。”

 

“We have rules on the limits of what you can spend to influence people to vote in particular ways, and I think we’re going to have to have limits on how much technology you can use to influence people.”

“对于为了影响人们以特定方式投票所能花费的金额,我们已有规则设限。我认为,对于可以用多少技术手段去影响人们,我们同样需要设限。”

 

Even at a smaller scale, manipulation could create problems. “On a day to day basis our lives are being, to some extent, manipulated by AI solutions,” said Sir Mark Walport, the government’s former chief scientist, who now leads UK Research and Innovation, the country’s new super-research council. “There comes a point at which, if organisations behave in a manner that upsets large swaths of the public, it could cause a backlash.”

即使规模较小的操纵也会带来问题。英国政府前首席科学家、现任英国新成立的超级科研委员会“英国研究与创新署”(UK Research and Innovation)负责人马克·沃尔波特爵士(Sir Mark Walport)表示:“在某种程度上,我们的日常生活每天都在被各种AI解决方案所操纵。总会有这样一个时刻:如果某些组织的行为让广大民众感到不安,就可能引发强烈抵制。”

 

Leading AI researchers have expressed similar concerns to the House of Lords AI committee, which is holding an inquiry into the economic, ethical and social implications of artificial intelligence. Evidence submitted by Imperial College London, one of the major universities for AI research, warns that insufficient regulation of the technology “could lead to societal backlash, not dissimilar to that seen with genetically modified food, should serious accidents occur or processes become out of control”.

AI领域的前沿研究者向上议院人工智能委员会表达了类似的担忧,该委员会正在对人工智能的经济、伦理和社会影响进行调查。从事AI研究的重点大学之一伦敦帝国理工学院在提交的证据中警告,对这项技术监管不足,“一旦发生严重事故或进程失控,可能引发社会的强烈抵制,与转基因食品曾经遭遇的抵制并无二致”。

 

Scientists at University College London share the concern about an anti-GM-style backlash, telling peers in their evidence: “If a number of AI examples developed badly, there could be considerable public backlash, as happened with genetically modified organisms.”

伦敦大学学院的科学家们同样担忧出现“反转基因”式的强烈抵制,他们在提交的证据中告诉上议院议员们:“如果一些人工智能应用发展得很糟糕,可能会遭到公众相当强烈的抵制,就像转基因生物所遭遇的那样。”

 

But the greatest impact on society may be AIs that work well, scientists told the Guardian. The Bank of England’s chief economist has warned that 15m UK jobs could be automated by 2035, meaning large scale re-training will be needed to avoid a sharp spike in unemployment. The short-term disruption could spark civil unrest, according to Maja Pantic, professor of affective and behavioural computing at Imperial, as could rising inequality driven by AI profits flowing to a handful of multinational companies.

但科学家们向《卫报》表示,给社会造成最大影响的,可能反而是那些运行良好的AI。英国央行首席经济学家已发出警告,到2035年,英国可能有1500万个工作岗位被自动化取代,这意味着需要进行大规模的再培训,以免失业人数急剧攀升。伦敦帝国理工学院情感与行为计算教授马亚·潘蒂奇(Maja Pantic)认为,这种短期冲击可能激起社会动荡;而AI利润流向少数跨国企业所加剧的不平等,同样可能引发动荡。

 

Subbarao Kambhampati, president of the Association for the Advancement of Artificial Intelligence, said that although technology often benefited society, it did not always do so equitably. “Recent technological advances have been leading to a lot more concentration of wealth,” he said. “I certainly do worry about the effects of AI technologies on wealth concentration and inequality, and how to make the benefits more inclusive.”

人工智能促进协会(Association for the Advancement of Artificial Intelligence)主席苏巴拉奥·卡姆巴哈帕蒂(Subbarao Kambhampati)指出,尽管科技常常造福社会,但它并非总能公平地分配这些益处。“近年的技术进步已导致财富更大程度的集中,”他说,“我确实担忧AI技术对财富集中和不平等的影响,也在思考如何让它的益处惠及更多人。”

 

The explosion of AI research in industry has driven intense demand for qualified scientists. At British universities, PhD students and postdoctoral researchers are courted by tech firms offering salaries two to five times those paid in academia. While some institutions are coping with the hiring frenzy, others are not. In departments where demand is most intense, senior academics fear they will lose a generation of talent who would traditionally drive research and teach future students.

产业界人工智能研究的爆炸式增长,带动了对合格科学家的旺盛需求。在英国的大学里,博士生和博士后研究人员备受科技公司追捧,后者开出的薪水是学术界的两到五倍。一些机构尚能应付这场招聘狂潮,另一些则不然。在需求最旺盛的院系,资深学者们担心会失去整整一代人才,而按照传统,正是这批人推动研究发展,并教导未来的学生。

 

According to Pantic, the best talent from academia is being sucked up by four major AI firms, Apple, Amazon, Google and Facebook. She said the situation could lead to 90% of innovation being controlled by the companies, shifting the balance of power from states to a handful of multinational companies.

根据潘蒂奇的观点,学术界最优秀的人才正在被苹果、亚马逊、谷歌和脸书这四大AI企业所吸纳。她指出,这一情形可能导致90%的创新被这些企业掌控,使权力的天平从国家倾向少数跨国企业。

 

Walport, who oversees almost all public funding of UK science research, is cautious about regulating AI for fear of hampering research. Instead he believes AI tools should be carefully monitored once they are put to use so that any problems can be picked up early.

沃尔波特负责监管英国科研几乎所有的公共经费,出于妨碍科研的顾虑,他对为AI设立监管持谨慎态度。他认为,更好的做法是在AI工具投入使用后对其进行审慎监控,以便尽早发现任何问题。

 

“If you don’t continuously monitor, you’re in danger of missing things when they go wrong,” he said. “In the new world, we should surely be working towards continuous, real-time monitoring so that one can see if anything untoward or unpredictable is happening as early as possible.”

他说:“如果不进行持续监控,就有可能在出错时毫无察觉。在这个新世界里,我们理应致力于持续、实时的监控,以便尽早发现任何异常或不可预测的情况。”

 

That might be part of the answer, according to Robert Fisher, professor of computer vision at Edinburgh University. “In theory companies are supposed to have liability, but we’re in a grey area where they could say their product worked just fine, but it was the cloud that made a mistake, or the telecoms provider, and they all disagree as to who is liable,” he said.

爱丁堡大学计算机视觉教授罗伯特·费舍尔(Robert Fisher)认为,这或许只是答案的一部分。他指出:“理论上,公司应当承担责任,但我们正处于一个灰色地带:公司可以说自己的产品运行得很好,出错的是云服务,或者是电信运营商,而各方在谁应担责的问题上始终无法达成一致。”

 

“We are clearly in brand new territory. AI allows us to leverage our intellectual power, so we try to do more ambitious things,” he said. “And that means we can have more wide-ranging disasters.”

“我们显然已经进入一个全新的领域。AI让我们得以放大自身的智力,因此我们会尝试做更有野心的事情,”他说,“而这也意味着,我们可能遭遇波及范围更广的灾难。”